I0408 23:36:04.746594 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0408 23:36:04.746812 7 e2e.go:124] Starting e2e run "ce5685a5-65b4-47d8-af76-e40748af99cd" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586388963 - Will randomize all specs
Will run 275 of 4992 specs

Apr 8 23:36:04.800: INFO: >>> kubeConfig: /root/.kube/config
Apr 8 23:36:04.803: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 8 23:36:04.823: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 8 23:36:04.849: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 8 23:36:04.849: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 8 23:36:04.849: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 8 23:36:04.860: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 8 23:36:04.860: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 8 23:36:04.860: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 8 23:36:04.865: INFO: kube-apiserver version: v1.17.0
Apr 8 23:36:04.865: INFO: >>> kubeConfig: /root/.kube/config
Apr 8 23:36:04.877: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:04.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Apr 8 23:36:04.944: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 23:36:08.999: INFO: Waiting up to 5m0s for pod "client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f" in namespace "pods-8295" to be "Succeeded or Failed"
Apr 8 23:36:09.043: INFO: Pod "client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 43.941358ms
Apr 8 23:36:11.097: INFO: Pod "client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09830168s
Apr 8 23:36:13.101: INFO: Pod "client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102345343s
STEP: Saw pod success
Apr 8 23:36:13.101: INFO: Pod "client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f" satisfied condition "Succeeded or Failed"
Apr 8 23:36:13.104: INFO: Trying to get logs from node latest-worker2 pod client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f container env3cont:
STEP: delete the pod
Apr 8 23:36:13.170: INFO: Waiting for pod client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f to disappear
Apr 8 23:36:13.294: INFO: Pod client-envvars-99820f95-f723-444f-8f34-53ed187a3c3f no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:13.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8295" for this suite.
• [SLOW TEST:8.432 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":17,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:13.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Apr 8 23:36:13.350: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:13.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5107" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":2,"skipped":19,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:13.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 8 23:36:13.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:30.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-377" for this suite.
• [SLOW TEST:16.883 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":3,"skipped":22,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:30.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 8 23:36:30.451: INFO: Waiting up to 5m0s for pod "pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb" in namespace "emptydir-1330" to be "Succeeded or Failed"
Apr 8 23:36:30.462: INFO: Pod "pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.331825ms
Apr 8 23:36:32.466: INFO: Pod "pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014967398s
Apr 8 23:36:34.470: INFO: Pod "pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019226604s
STEP: Saw pod success
Apr 8 23:36:34.470: INFO: Pod "pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb" satisfied condition "Succeeded or Failed"
Apr 8 23:36:34.474: INFO: Trying to get logs from node latest-worker pod pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb container test-container:
STEP: delete the pod
Apr 8 23:36:34.508: INFO: Waiting for pod pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb to disappear
Apr 8 23:36:34.527: INFO: Pod pod-0fa1e5d8-bab1-42a0-b026-5487d98574bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:34.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1330" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":22,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:34.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 8 23:36:35.134: INFO: created pod pod-service-account-defaultsa
Apr 8 23:36:35.134: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 8 23:36:35.142: INFO: created pod pod-service-account-mountsa
Apr 8 23:36:35.142: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 8 23:36:35.166: INFO: created pod pod-service-account-nomountsa
Apr 8 23:36:35.166: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 8 23:36:35.178: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 8 23:36:35.178: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 8 23:36:35.251: INFO: created pod pod-service-account-mountsa-mountspec
Apr 8 23:36:35.251: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 8 23:36:35.257: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 8 23:36:35.257: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 8 23:36:35.262: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 8 23:36:35.262: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 8 23:36:35.276: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 8 23:36:35.276: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 8 23:36:35.313: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 8 23:36:35.313: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:35.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5124" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":5,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:36:35.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-85ab0224-4be6-4592-8864-44c9a823903f
STEP: Creating a pod to test consume secrets
Apr 8 23:36:35.659: INFO: Waiting up to 5m0s for pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7" in namespace "secrets-8376" to be "Succeeded or Failed"
Apr 8 23:36:35.686: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.846946ms
Apr 8 23:36:38.043: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384908135s
Apr 8 23:36:40.115: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456181107s
Apr 8 23:36:42.583: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.924551315s
Apr 8 23:36:44.630: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971930867s
Apr 8 23:36:46.635: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Running", Reason="", readiness=true. Elapsed: 10.976191858s
Apr 8 23:36:48.639: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.98020967s
STEP: Saw pod success
Apr 8 23:36:48.639: INFO: Pod "pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7" satisfied condition "Succeeded or Failed"
Apr 8 23:36:48.642: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7 container secret-volume-test:
STEP: delete the pod
Apr 8 23:36:48.680: INFO: Waiting for pod pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7 to disappear
Apr 8 23:36:48.699: INFO: Pod pod-secrets-909a83a9-44dd-45bf-8600-4880e4b2efa7 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:36:48.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8376" for this suite.
STEP: Destroying namespace "secret-namespace-6267" for this suite.
• [SLOW TEST:13.302 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":53,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:36:48.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:36:48.812: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8516 I0408 23:36:48.831211 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8516, replica count: 1 I0408 23:36:49.881651 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 23:36:50.881858 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0408 23:36:51.882062 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 23:36:52.882353 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 23:36:53.049: INFO: Created: latency-svc-6bckn Apr 8 23:36:53.057: INFO: Got endpoints: latency-svc-6bckn [74.541716ms] Apr 8 23:36:53.087: INFO: Created: latency-svc-6bm82 Apr 8 23:36:53.101: INFO: Got endpoints: latency-svc-6bm82 [44.230793ms] Apr 8 23:36:53.121: INFO: Created: latency-svc-8rrb4 Apr 8 23:36:53.133: INFO: Got endpoints: latency-svc-8rrb4 [76.548647ms] Apr 8 23:36:53.181: INFO: Created: latency-svc-lh74x Apr 8 23:36:53.199: INFO: Created: latency-svc-sxrk6 Apr 8 23:36:53.201: INFO: Got endpoints: latency-svc-lh74x [143.41615ms] Apr 8 23:36:53.211: INFO: Got endpoints: latency-svc-sxrk6 [153.895556ms] Apr 8 23:36:53.229: INFO: Created: latency-svc-7mp4c Apr 8 23:36:53.242: INFO: Got endpoints: latency-svc-7mp4c [185.150622ms] Apr 8 23:36:53.260: INFO: Created: latency-svc-55g5b Apr 8 23:36:53.325: INFO: Got endpoints: latency-svc-55g5b [267.885774ms] Apr 8 23:36:53.338: INFO: Created: latency-svc-smx77 Apr 8 23:36:53.355: INFO: Got endpoints: latency-svc-smx77 [298.403145ms] Apr 8 23:36:53.376: INFO: Created: latency-svc-lrz4x Apr 8 23:36:53.386: INFO: Got endpoints: latency-svc-lrz4x [328.489123ms] Apr 8 23:36:53.493: INFO: Created: latency-svc-gnczm Apr 8 23:36:53.525: INFO: Got endpoints: latency-svc-gnczm [468.079964ms] Apr 8 23:36:53.526: INFO: Created: latency-svc-2grfp Apr 8 23:36:53.544: INFO: Got endpoints: latency-svc-2grfp [487.023365ms] Apr 8 23:36:53.560: INFO: Created: latency-svc-w2qx4 Apr 8 23:36:53.575: INFO: Got endpoints: latency-svc-w2qx4 [518.092186ms] Apr 8 23:36:53.590: INFO: Created: latency-svc-2v9g5 Apr 8 23:36:53.624: INFO: Got endpoints: 
latency-svc-2v9g5 [566.709889ms] Apr 8 23:36:53.636: INFO: Created: latency-svc-pthdv Apr 8 23:36:53.652: INFO: Got endpoints: latency-svc-pthdv [594.983127ms] Apr 8 23:36:53.673: INFO: Created: latency-svc-cjfzl Apr 8 23:36:53.688: INFO: Got endpoints: latency-svc-cjfzl [630.483923ms] Apr 8 23:36:53.709: INFO: Created: latency-svc-ltm9z Apr 8 23:36:53.774: INFO: Got endpoints: latency-svc-ltm9z [716.984619ms] Apr 8 23:36:53.777: INFO: Created: latency-svc-g748b Apr 8 23:36:53.800: INFO: Got endpoints: latency-svc-g748b [699.205594ms] Apr 8 23:36:53.830: INFO: Created: latency-svc-tkt4x Apr 8 23:36:53.846: INFO: Got endpoints: latency-svc-tkt4x [712.800234ms] Apr 8 23:36:53.870: INFO: Created: latency-svc-p94zw Apr 8 23:36:53.893: INFO: Got endpoints: latency-svc-p94zw [692.428907ms] Apr 8 23:36:53.913: INFO: Created: latency-svc-c8ct9 Apr 8 23:36:53.937: INFO: Got endpoints: latency-svc-c8ct9 [725.54681ms] Apr 8 23:36:53.961: INFO: Created: latency-svc-lnhdl Apr 8 23:36:53.972: INFO: Got endpoints: latency-svc-lnhdl [729.746377ms] Apr 8 23:36:53.991: INFO: Created: latency-svc-4t7hz Apr 8 23:36:54.019: INFO: Got endpoints: latency-svc-4t7hz [694.03176ms] Apr 8 23:36:54.046: INFO: Created: latency-svc-mlbp5 Apr 8 23:36:54.076: INFO: Got endpoints: latency-svc-mlbp5 [720.609672ms] Apr 8 23:36:54.107: INFO: Created: latency-svc-khx5l Apr 8 23:36:54.139: INFO: Got endpoints: latency-svc-khx5l [753.559001ms] Apr 8 23:36:54.183: INFO: Created: latency-svc-8g6s2 Apr 8 23:36:54.203: INFO: Got endpoints: latency-svc-8g6s2 [678.229382ms] Apr 8 23:36:54.277: INFO: Created: latency-svc-lgwdd Apr 8 23:36:54.281: INFO: Got endpoints: latency-svc-lgwdd [736.926284ms] Apr 8 23:36:54.304: INFO: Created: latency-svc-4xb89 Apr 8 23:36:54.317: INFO: Got endpoints: latency-svc-4xb89 [741.686936ms] Apr 8 23:36:54.340: INFO: Created: latency-svc-67zvx Apr 8 23:36:54.355: INFO: Got endpoints: latency-svc-67zvx [730.589773ms] Apr 8 23:36:54.376: INFO: Created: latency-svc-z6l8d Apr 8 
23:36:54.408: INFO: Got endpoints: latency-svc-z6l8d [756.168138ms] Apr 8 23:36:54.429: INFO: Created: latency-svc-pqtxb Apr 8 23:36:54.439: INFO: Got endpoints: latency-svc-pqtxb [751.353224ms] Apr 8 23:36:54.500: INFO: Created: latency-svc-z8zgt Apr 8 23:36:54.528: INFO: Got endpoints: latency-svc-z8zgt [753.7455ms] Apr 8 23:36:54.555: INFO: Created: latency-svc-x4pxm Apr 8 23:36:54.566: INFO: Got endpoints: latency-svc-x4pxm [765.165463ms] Apr 8 23:36:54.592: INFO: Created: latency-svc-mxsmb Apr 8 23:36:54.622: INFO: Got endpoints: latency-svc-mxsmb [775.895211ms] Apr 8 23:36:54.684: INFO: Created: latency-svc-qq5dz Apr 8 23:36:54.704: INFO: Got endpoints: latency-svc-qq5dz [810.762747ms] Apr 8 23:36:54.706: INFO: Created: latency-svc-59776 Apr 8 23:36:54.716: INFO: Got endpoints: latency-svc-59776 [778.998554ms] Apr 8 23:36:54.734: INFO: Created: latency-svc-gzmw7 Apr 8 23:36:54.758: INFO: Got endpoints: latency-svc-gzmw7 [785.749604ms] Apr 8 23:36:54.833: INFO: Created: latency-svc-7b7xj Apr 8 23:36:54.838: INFO: Got endpoints: latency-svc-7b7xj [818.795361ms] Apr 8 23:36:54.862: INFO: Created: latency-svc-j5flq Apr 8 23:36:54.874: INFO: Got endpoints: latency-svc-j5flq [798.019475ms] Apr 8 23:36:54.914: INFO: Created: latency-svc-t64cf Apr 8 23:36:54.928: INFO: Got endpoints: latency-svc-t64cf [788.535901ms] Apr 8 23:36:54.985: INFO: Created: latency-svc-qfssc Apr 8 23:36:54.994: INFO: Got endpoints: latency-svc-qfssc [791.016691ms] Apr 8 23:36:55.036: INFO: Created: latency-svc-gzhs8 Apr 8 23:36:55.048: INFO: Got endpoints: latency-svc-gzhs8 [766.559897ms] Apr 8 23:36:55.072: INFO: Created: latency-svc-bmctg Apr 8 23:36:55.115: INFO: Got endpoints: latency-svc-bmctg [797.822753ms] Apr 8 23:36:55.136: INFO: Created: latency-svc-mqc9j Apr 8 23:36:55.150: INFO: Got endpoints: latency-svc-mqc9j [795.717347ms] Apr 8 23:36:55.190: INFO: Created: latency-svc-sd69v Apr 8 23:36:55.206: INFO: Got endpoints: latency-svc-sd69v [797.710853ms] Apr 8 23:36:55.259: INFO: 
Created: latency-svc-bzwsj Apr 8 23:36:55.262: INFO: Got endpoints: latency-svc-bzwsj [822.107397ms] Apr 8 23:36:55.288: INFO: Created: latency-svc-pvnpv Apr 8 23:36:55.302: INFO: Got endpoints: latency-svc-pvnpv [773.88949ms] Apr 8 23:36:55.323: INFO: Created: latency-svc-hbzx5 Apr 8 23:36:55.338: INFO: Got endpoints: latency-svc-hbzx5 [772.504661ms] Apr 8 23:36:55.384: INFO: Created: latency-svc-r2k8x Apr 8 23:36:55.406: INFO: Got endpoints: latency-svc-r2k8x [783.7206ms] Apr 8 23:36:55.407: INFO: Created: latency-svc-mdbwv Apr 8 23:36:55.416: INFO: Got endpoints: latency-svc-mdbwv [711.668972ms] Apr 8 23:36:55.454: INFO: Created: latency-svc-6bz54 Apr 8 23:36:55.529: INFO: Got endpoints: latency-svc-6bz54 [812.616276ms] Apr 8 23:36:55.568: INFO: Created: latency-svc-khqhq Apr 8 23:36:55.581: INFO: Got endpoints: latency-svc-khqhq [823.173859ms] Apr 8 23:36:55.606: INFO: Created: latency-svc-drkrh Apr 8 23:36:55.617: INFO: Got endpoints: latency-svc-drkrh [778.925912ms] Apr 8 23:36:55.666: INFO: Created: latency-svc-pksvf Apr 8 23:36:55.682: INFO: Got endpoints: latency-svc-pksvf [807.763537ms] Apr 8 23:36:55.713: INFO: Created: latency-svc-6rqx5 Apr 8 23:36:55.726: INFO: Got endpoints: latency-svc-6rqx5 [797.526878ms] Apr 8 23:36:55.749: INFO: Created: latency-svc-bzbwg Apr 8 23:36:55.760: INFO: Got endpoints: latency-svc-bzbwg [766.100592ms] Apr 8 23:36:55.792: INFO: Created: latency-svc-fzfsl Apr 8 23:36:55.815: INFO: Got endpoints: latency-svc-fzfsl [767.392033ms] Apr 8 23:36:55.844: INFO: Created: latency-svc-9r59f Apr 8 23:36:55.859: INFO: Got endpoints: latency-svc-9r59f [744.207755ms] Apr 8 23:36:55.880: INFO: Created: latency-svc-twbb5 Apr 8 23:36:55.917: INFO: Got endpoints: latency-svc-twbb5 [766.826737ms] Apr 8 23:36:55.940: INFO: Created: latency-svc-6gh5h Apr 8 23:36:55.958: INFO: Got endpoints: latency-svc-6gh5h [751.718541ms] Apr 8 23:36:55.977: INFO: Created: latency-svc-q4kjl Apr 8 23:36:55.992: INFO: Got endpoints: latency-svc-q4kjl 
[730.077133ms] Apr 8 23:36:56.013: INFO: Created: latency-svc-pvt8l Apr 8 23:36:56.043: INFO: Got endpoints: latency-svc-pvt8l [740.844298ms] Apr 8 23:36:56.062: INFO: Created: latency-svc-8tbb4 Apr 8 23:36:56.085: INFO: Got endpoints: latency-svc-8tbb4 [747.107196ms] Apr 8 23:36:56.120: INFO: Created: latency-svc-zhnb9 Apr 8 23:36:56.135: INFO: Got endpoints: latency-svc-zhnb9 [729.327671ms] Apr 8 23:36:56.175: INFO: Created: latency-svc-bznzq Apr 8 23:36:56.180: INFO: Got endpoints: latency-svc-bznzq [764.028189ms] Apr 8 23:36:56.228: INFO: Created: latency-svc-jwsjp Apr 8 23:36:56.246: INFO: Got endpoints: latency-svc-jwsjp [717.649471ms] Apr 8 23:36:56.271: INFO: Created: latency-svc-ksqf5 Apr 8 23:36:56.301: INFO: Got endpoints: latency-svc-ksqf5 [719.676764ms] Apr 8 23:36:56.307: INFO: Created: latency-svc-bfjrp Apr 8 23:36:56.349: INFO: Got endpoints: latency-svc-bfjrp [732.421853ms] Apr 8 23:36:56.384: INFO: Created: latency-svc-ds8tt Apr 8 23:36:56.396: INFO: Got endpoints: latency-svc-ds8tt [714.014822ms] Apr 8 23:36:56.432: INFO: Created: latency-svc-94zsq Apr 8 23:36:56.456: INFO: Got endpoints: latency-svc-94zsq [730.672585ms] Apr 8 23:36:56.487: INFO: Created: latency-svc-gmb82 Apr 8 23:36:56.504: INFO: Got endpoints: latency-svc-gmb82 [743.420623ms] Apr 8 23:36:56.523: INFO: Created: latency-svc-2ksd9 Apr 8 23:36:56.552: INFO: Got endpoints: latency-svc-2ksd9 [736.48701ms] Apr 8 23:36:56.559: INFO: Created: latency-svc-crdw4 Apr 8 23:36:56.572: INFO: Got endpoints: latency-svc-crdw4 [713.103742ms] Apr 8 23:36:56.590: INFO: Created: latency-svc-7m5qf Apr 8 23:36:56.602: INFO: Got endpoints: latency-svc-7m5qf [685.157722ms] Apr 8 23:36:56.624: INFO: Created: latency-svc-b8d98 Apr 8 23:36:56.638: INFO: Got endpoints: latency-svc-b8d98 [680.130469ms] Apr 8 23:36:56.678: INFO: Created: latency-svc-xzqgk Apr 8 23:36:56.702: INFO: Created: latency-svc-w4br5 Apr 8 23:36:56.702: INFO: Got endpoints: latency-svc-xzqgk [709.896824ms] Apr 8 23:36:56.710: INFO: 
Got endpoints: latency-svc-w4br5 [667.048899ms] Apr 8 23:36:56.740: INFO: Created: latency-svc-9xxkg Apr 8 23:36:56.752: INFO: Got endpoints: latency-svc-9xxkg [666.755158ms] Apr 8 23:36:56.775: INFO: Created: latency-svc-pc5rk Apr 8 23:36:56.823: INFO: Got endpoints: latency-svc-pc5rk [687.772295ms] Apr 8 23:36:56.835: INFO: Created: latency-svc-sfxlw Apr 8 23:36:56.845: INFO: Got endpoints: latency-svc-sfxlw [665.177501ms] Apr 8 23:36:56.864: INFO: Created: latency-svc-6vllb Apr 8 23:36:56.875: INFO: Got endpoints: latency-svc-6vllb [628.63201ms] Apr 8 23:36:56.894: INFO: Created: latency-svc-8m529 Apr 8 23:36:56.905: INFO: Got endpoints: latency-svc-8m529 [604.503689ms] Apr 8 23:36:56.990: INFO: Created: latency-svc-wqkfq Apr 8 23:36:57.002: INFO: Got endpoints: latency-svc-wqkfq [652.113211ms] Apr 8 23:36:57.034: INFO: Created: latency-svc-dsgps Apr 8 23:36:57.049: INFO: Got endpoints: latency-svc-dsgps [653.346946ms] Apr 8 23:36:57.086: INFO: Created: latency-svc-n7wnf Apr 8 23:36:57.121: INFO: Got endpoints: latency-svc-n7wnf [664.796879ms] Apr 8 23:36:57.146: INFO: Created: latency-svc-kcv6t Apr 8 23:36:57.160: INFO: Got endpoints: latency-svc-kcv6t [655.540357ms] Apr 8 23:36:57.201: INFO: Created: latency-svc-2swmq Apr 8 23:36:57.213: INFO: Got endpoints: latency-svc-2swmq [661.498744ms] Apr 8 23:36:57.258: INFO: Created: latency-svc-h9rx2 Apr 8 23:36:57.273: INFO: Got endpoints: latency-svc-h9rx2 [700.596948ms] Apr 8 23:36:57.273: INFO: Created: latency-svc-llfc7 Apr 8 23:36:57.285: INFO: Got endpoints: latency-svc-llfc7 [682.978227ms] Apr 8 23:36:57.303: INFO: Created: latency-svc-45q9n Apr 8 23:36:57.316: INFO: Got endpoints: latency-svc-45q9n [677.274185ms] Apr 8 23:36:57.338: INFO: Created: latency-svc-fvg4w Apr 8 23:36:57.352: INFO: Got endpoints: latency-svc-fvg4w [650.130469ms] Apr 8 23:36:57.408: INFO: Created: latency-svc-nghn9 Apr 8 23:36:57.432: INFO: Got endpoints: latency-svc-nghn9 [722.433848ms] Apr 8 23:36:57.477: INFO: Created: 
latency-svc-s629q Apr 8 23:36:57.498: INFO: Got endpoints: latency-svc-s629q [746.151325ms] Apr 8 23:36:57.559: INFO: Created: latency-svc-8m9nv Apr 8 23:36:57.585: INFO: Got endpoints: latency-svc-8m9nv [761.486506ms] Apr 8 23:36:57.585: INFO: Created: latency-svc-9mhvw Apr 8 23:36:57.600: INFO: Got endpoints: latency-svc-9mhvw [754.881914ms] Apr 8 23:36:57.619: INFO: Created: latency-svc-72j76 Apr 8 23:36:57.644: INFO: Got endpoints: latency-svc-72j76 [768.684672ms] Apr 8 23:36:57.698: INFO: Created: latency-svc-vfqcc Apr 8 23:36:57.727: INFO: Got endpoints: latency-svc-vfqcc [821.879055ms] Apr 8 23:36:57.728: INFO: Created: latency-svc-qmm8v Apr 8 23:36:57.738: INFO: Got endpoints: latency-svc-qmm8v [736.150826ms] Apr 8 23:36:57.757: INFO: Created: latency-svc-pq27f Apr 8 23:36:57.782: INFO: Got endpoints: latency-svc-pq27f [732.948388ms] Apr 8 23:36:57.870: INFO: Created: latency-svc-5542w Apr 8 23:36:57.891: INFO: Created: latency-svc-4tb9x Apr 8 23:36:57.891: INFO: Got endpoints: latency-svc-5542w [769.661566ms] Apr 8 23:36:57.902: INFO: Got endpoints: latency-svc-4tb9x [742.634263ms] Apr 8 23:36:57.919: INFO: Created: latency-svc-s4rtr Apr 8 23:36:57.932: INFO: Got endpoints: latency-svc-s4rtr [718.842314ms] Apr 8 23:36:57.949: INFO: Created: latency-svc-d4j8h Apr 8 23:36:57.963: INFO: Got endpoints: latency-svc-d4j8h [689.917494ms] Apr 8 23:36:57.989: INFO: Created: latency-svc-pswv9 Apr 8 23:36:57.992: INFO: Got endpoints: latency-svc-pswv9 [706.41649ms] Apr 8 23:36:58.009: INFO: Created: latency-svc-4qvsl Apr 8 23:36:58.035: INFO: Got endpoints: latency-svc-4qvsl [719.015027ms] Apr 8 23:36:58.064: INFO: Created: latency-svc-dx846 Apr 8 23:36:58.074: INFO: Got endpoints: latency-svc-dx846 [721.911763ms] Apr 8 23:36:58.088: INFO: Created: latency-svc-pmnlx Apr 8 23:36:58.115: INFO: Got endpoints: latency-svc-pmnlx [682.355001ms] Apr 8 23:36:58.125: INFO: Created: latency-svc-zkpjw Apr 8 23:36:58.147: INFO: Got endpoints: latency-svc-zkpjw [648.297356ms] Apr 
8 23:36:58.178: INFO: Created: latency-svc-6w7c7 Apr 8 23:36:58.194: INFO: Got endpoints: latency-svc-6w7c7 [608.901323ms] Apr 8 23:36:58.213: INFO: Created: latency-svc-kr4zz Apr 8 23:36:58.258: INFO: Got endpoints: latency-svc-kr4zz [658.330717ms] Apr 8 23:36:58.279: INFO: Created: latency-svc-psc4z Apr 8 23:36:58.310: INFO: Got endpoints: latency-svc-psc4z [666.617509ms] Apr 8 23:36:58.340: INFO: Created: latency-svc-b5p5h Apr 8 23:36:58.385: INFO: Got endpoints: latency-svc-b5p5h [657.286791ms] Apr 8 23:36:58.389: INFO: Created: latency-svc-hbh4w Apr 8 23:36:58.400: INFO: Got endpoints: latency-svc-hbh4w [661.765778ms] Apr 8 23:36:58.417: INFO: Created: latency-svc-sfpfq Apr 8 23:36:58.430: INFO: Got endpoints: latency-svc-sfpfq [647.290531ms] Apr 8 23:36:58.447: INFO: Created: latency-svc-m7s9r Apr 8 23:36:58.460: INFO: Got endpoints: latency-svc-m7s9r [568.624079ms] Apr 8 23:36:58.528: INFO: Created: latency-svc-99s8g Apr 8 23:36:58.544: INFO: Got endpoints: latency-svc-99s8g [642.139679ms] Apr 8 23:36:58.545: INFO: Created: latency-svc-zlmrx Apr 8 23:36:58.555: INFO: Got endpoints: latency-svc-zlmrx [623.173788ms] Apr 8 23:36:58.599: INFO: Created: latency-svc-2d8n5 Apr 8 23:36:58.609: INFO: Got endpoints: latency-svc-2d8n5 [646.416048ms] Apr 8 23:36:58.672: INFO: Created: latency-svc-kfw45 Apr 8 23:36:58.705: INFO: Got endpoints: latency-svc-kfw45 [713.164005ms] Apr 8 23:36:58.706: INFO: Created: latency-svc-zzjfk Apr 8 23:36:58.741: INFO: Got endpoints: latency-svc-zzjfk [706.707713ms] Apr 8 23:36:58.768: INFO: Created: latency-svc-7492n Apr 8 23:36:58.822: INFO: Got endpoints: latency-svc-7492n [748.273283ms] Apr 8 23:36:58.823: INFO: Created: latency-svc-qjzgw Apr 8 23:36:58.828: INFO: Got endpoints: latency-svc-qjzgw [713.100194ms] Apr 8 23:36:58.844: INFO: Created: latency-svc-vg44r Apr 8 23:36:58.862: INFO: Got endpoints: latency-svc-vg44r [715.317025ms] Apr 8 23:36:58.891: INFO: Created: latency-svc-7ltsf Apr 8 23:36:58.910: INFO: Got endpoints: 
latency-svc-7ltsf [716.315649ms] Apr 8 23:36:58.972: INFO: Created: latency-svc-s8zvf Apr 8 23:36:59.017: INFO: Got endpoints: latency-svc-s8zvf [758.404272ms] Apr 8 23:36:59.018: INFO: Created: latency-svc-jhkpn Apr 8 23:36:59.054: INFO: Got endpoints: latency-svc-jhkpn [744.041082ms] Apr 8 23:36:59.115: INFO: Created: latency-svc-mkgxh Apr 8 23:36:59.137: INFO: Created: latency-svc-62mpn Apr 8 23:36:59.137: INFO: Got endpoints: latency-svc-mkgxh [752.523472ms] Apr 8 23:36:59.174: INFO: Got endpoints: latency-svc-62mpn [774.432472ms] Apr 8 23:36:59.215: INFO: Created: latency-svc-kn9sm Apr 8 23:36:59.241: INFO: Got endpoints: latency-svc-kn9sm [810.927595ms] Apr 8 23:36:59.264: INFO: Created: latency-svc-jmxmg Apr 8 23:36:59.294: INFO: Got endpoints: latency-svc-jmxmg [834.735525ms] Apr 8 23:36:59.324: INFO: Created: latency-svc-2s4dl Apr 8 23:36:59.379: INFO: Created: latency-svc-mjd4w Apr 8 23:36:59.379: INFO: Got endpoints: latency-svc-2s4dl [834.620258ms] Apr 8 23:36:59.395: INFO: Got endpoints: latency-svc-mjd4w [839.386259ms] Apr 8 23:36:59.424: INFO: Created: latency-svc-jq55w Apr 8 23:36:59.473: INFO: Got endpoints: latency-svc-jq55w [863.531847ms] Apr 8 23:36:59.528: INFO: Created: latency-svc-8gw7s Apr 8 23:36:59.546: INFO: Created: latency-svc-kj9cd Apr 8 23:36:59.546: INFO: Got endpoints: latency-svc-8gw7s [840.6517ms] Apr 8 23:36:59.560: INFO: Got endpoints: latency-svc-kj9cd [818.295514ms] Apr 8 23:36:59.576: INFO: Created: latency-svc-whhgd Apr 8 23:36:59.600: INFO: Got endpoints: latency-svc-whhgd [777.666016ms] Apr 8 23:36:59.660: INFO: Created: latency-svc-d7g5j Apr 8 23:36:59.683: INFO: Created: latency-svc-5d5wr Apr 8 23:36:59.683: INFO: Got endpoints: latency-svc-d7g5j [855.280281ms] Apr 8 23:36:59.698: INFO: Got endpoints: latency-svc-5d5wr [835.903447ms] Apr 8 23:36:59.720: INFO: Created: latency-svc-9xq7v Apr 8 23:36:59.749: INFO: Got endpoints: latency-svc-9xq7v [838.266985ms] Apr 8 23:36:59.804: INFO: Created: latency-svc-jdpr5 Apr 8 
23:36:59.822: INFO: Got endpoints: latency-svc-jdpr5 [805.224817ms] Apr 8 23:36:59.822: INFO: Created: latency-svc-ll6v6 Apr 8 23:36:59.838: INFO: Got endpoints: latency-svc-ll6v6 [783.363821ms] Apr 8 23:36:59.858: INFO: Created: latency-svc-kh8kf Apr 8 23:36:59.888: INFO: Got endpoints: latency-svc-kh8kf [750.842718ms] Apr 8 23:36:59.936: INFO: Created: latency-svc-zgfn4 Apr 8 23:36:59.959: INFO: Got endpoints: latency-svc-zgfn4 [784.376987ms] Apr 8 23:36:59.959: INFO: Created: latency-svc-94pjh Apr 8 23:36:59.970: INFO: Got endpoints: latency-svc-94pjh [728.796663ms] Apr 8 23:36:59.983: INFO: Created: latency-svc-9whmm Apr 8 23:37:00.000: INFO: Got endpoints: latency-svc-9whmm [706.107506ms] Apr 8 23:37:00.026: INFO: Created: latency-svc-qpbh5 Apr 8 23:37:00.061: INFO: Got endpoints: latency-svc-qpbh5 [681.953945ms] Apr 8 23:37:00.074: INFO: Created: latency-svc-dclmv Apr 8 23:37:00.093: INFO: Got endpoints: latency-svc-dclmv [698.169553ms] Apr 8 23:37:00.110: INFO: Created: latency-svc-n2bnw Apr 8 23:37:00.123: INFO: Got endpoints: latency-svc-n2bnw [649.957953ms] Apr 8 23:37:00.140: INFO: Created: latency-svc-5jcnw Apr 8 23:37:00.153: INFO: Got endpoints: latency-svc-5jcnw [606.977909ms] Apr 8 23:37:00.187: INFO: Created: latency-svc-ccv4j Apr 8 23:37:00.201: INFO: Got endpoints: latency-svc-ccv4j [641.473469ms] Apr 8 23:37:00.235: INFO: Created: latency-svc-knn42 Apr 8 23:37:00.261: INFO: Got endpoints: latency-svc-knn42 [661.013312ms] Apr 8 23:37:00.277: INFO: Created: latency-svc-c6zxg Apr 8 23:37:00.338: INFO: Got endpoints: latency-svc-c6zxg [654.406889ms] Apr 8 23:37:00.339: INFO: Created: latency-svc-dqbhd Apr 8 23:37:00.344: INFO: Got endpoints: latency-svc-dqbhd [646.402226ms] Apr 8 23:37:00.368: INFO: Created: latency-svc-792hb Apr 8 23:37:00.384: INFO: Got endpoints: latency-svc-792hb [635.792491ms] Apr 8 23:37:00.408: INFO: Created: latency-svc-75shf Apr 8 23:37:00.425: INFO: Got endpoints: latency-svc-75shf [602.728337ms] Apr 8 23:37:00.474: INFO: 
Created: latency-svc-76wbl Apr 8 23:37:00.498: INFO: Got endpoints: latency-svc-76wbl [660.60802ms] Apr 8 23:37:00.499: INFO: Created: latency-svc-kmptg Apr 8 23:37:00.509: INFO: Got endpoints: latency-svc-kmptg [620.541818ms] Apr 8 23:37:00.522: INFO: Created: latency-svc-mjtc4 Apr 8 23:37:00.532: INFO: Got endpoints: latency-svc-mjtc4 [573.911297ms] Apr 8 23:37:00.554: INFO: Created: latency-svc-lr8vm Apr 8 23:37:00.569: INFO: Got endpoints: latency-svc-lr8vm [599.101346ms] Apr 8 23:37:00.595: INFO: Created: latency-svc-qczmz Apr 8 23:37:00.611: INFO: Got endpoints: latency-svc-qczmz [610.303384ms] Apr 8 23:37:00.632: INFO: Created: latency-svc-wnrk8 Apr 8 23:37:00.644: INFO: Got endpoints: latency-svc-wnrk8 [583.03462ms] Apr 8 23:37:00.673: INFO: Created: latency-svc-vn7qt Apr 8 23:37:00.692: INFO: Got endpoints: latency-svc-vn7qt [599.334247ms] Apr 8 23:37:00.739: INFO: Created: latency-svc-f2sz7 Apr 8 23:37:00.747: INFO: Got endpoints: latency-svc-f2sz7 [623.620056ms] Apr 8 23:37:00.762: INFO: Created: latency-svc-vsjf4 Apr 8 23:37:00.776: INFO: Got endpoints: latency-svc-vsjf4 [623.189439ms] Apr 8 23:37:00.793: INFO: Created: latency-svc-p8hrx Apr 8 23:37:00.806: INFO: Got endpoints: latency-svc-p8hrx [604.633516ms] Apr 8 23:37:00.882: INFO: Created: latency-svc-5wglq Apr 8 23:37:00.908: INFO: Created: latency-svc-9tj44 Apr 8 23:37:00.908: INFO: Got endpoints: latency-svc-5wglq [646.587898ms] Apr 8 23:37:00.922: INFO: Got endpoints: latency-svc-9tj44 [584.214709ms] Apr 8 23:37:00.936: INFO: Created: latency-svc-5pm7c Apr 8 23:37:00.952: INFO: Got endpoints: latency-svc-5pm7c [607.41898ms] Apr 8 23:37:00.978: INFO: Created: latency-svc-htxht Apr 8 23:37:01.031: INFO: Got endpoints: latency-svc-htxht [646.530894ms] Apr 8 23:37:01.050: INFO: Created: latency-svc-f8vws Apr 8 23:37:01.060: INFO: Got endpoints: latency-svc-f8vws [635.055506ms] Apr 8 23:37:01.086: INFO: Created: latency-svc-l78rb Apr 8 23:37:01.096: INFO: Got endpoints: latency-svc-l78rb 
[597.817999ms] Apr 8 23:37:01.117: INFO: Created: latency-svc-gpjxg Apr 8 23:37:01.145: INFO: Got endpoints: latency-svc-gpjxg [636.114073ms] Apr 8 23:37:01.165: INFO: Created: latency-svc-gjbqh Apr 8 23:37:01.180: INFO: Got endpoints: latency-svc-gjbqh [647.725174ms] Apr 8 23:37:01.236: INFO: Created: latency-svc-k94k6 Apr 8 23:37:01.277: INFO: Got endpoints: latency-svc-k94k6 [708.001683ms] Apr 8 23:37:01.290: INFO: Created: latency-svc-25z28 Apr 8 23:37:01.303: INFO: Got endpoints: latency-svc-25z28 [692.46799ms] Apr 8 23:37:01.320: INFO: Created: latency-svc-fp7p6 Apr 8 23:37:01.352: INFO: Got endpoints: latency-svc-fp7p6 [707.679459ms] Apr 8 23:37:01.375: INFO: Created: latency-svc-hsf5q Apr 8 23:37:01.420: INFO: Got endpoints: latency-svc-hsf5q [727.890556ms] Apr 8 23:37:01.448: INFO: Created: latency-svc-bvcrv Apr 8 23:37:01.477: INFO: Got endpoints: latency-svc-bvcrv [730.816325ms] Apr 8 23:37:01.518: INFO: Created: latency-svc-ssbb4 Apr 8 23:37:01.552: INFO: Got endpoints: latency-svc-ssbb4 [775.783759ms] Apr 8 23:37:01.579: INFO: Created: latency-svc-vfjnc Apr 8 23:37:01.591: INFO: Got endpoints: latency-svc-vfjnc [785.22712ms] Apr 8 23:37:01.609: INFO: Created: latency-svc-d4lfs Apr 8 23:37:01.623: INFO: Got endpoints: latency-svc-d4lfs [715.514205ms] Apr 8 23:37:01.639: INFO: Created: latency-svc-lspn6 Apr 8 23:37:01.666: INFO: Got endpoints: latency-svc-lspn6 [743.845828ms] Apr 8 23:37:01.681: INFO: Created: latency-svc-mrd6h Apr 8 23:37:01.695: INFO: Got endpoints: latency-svc-mrd6h [743.273874ms] Apr 8 23:37:01.715: INFO: Created: latency-svc-96jw5 Apr 8 23:37:01.731: INFO: Got endpoints: latency-svc-96jw5 [700.222ms] Apr 8 23:37:01.746: INFO: Created: latency-svc-fddsf Apr 8 23:37:01.755: INFO: Got endpoints: latency-svc-fddsf [695.140404ms] Apr 8 23:37:01.798: INFO: Created: latency-svc-97b72 Apr 8 23:37:01.818: INFO: Created: latency-svc-f66w7 Apr 8 23:37:01.818: INFO: Got endpoints: latency-svc-97b72 [721.260478ms] Apr 8 23:37:01.827: INFO: Got 
endpoints: latency-svc-f66w7 [682.133116ms] Apr 8 23:37:01.848: INFO: Created: latency-svc-8ghng Apr 8 23:37:01.860: INFO: Got endpoints: latency-svc-8ghng [680.17324ms] Apr 8 23:37:01.879: INFO: Created: latency-svc-hfzwq Apr 8 23:37:01.947: INFO: Got endpoints: latency-svc-hfzwq [670.379368ms] Apr 8 23:37:01.951: INFO: Created: latency-svc-cz4bd Apr 8 23:37:01.969: INFO: Got endpoints: latency-svc-cz4bd [665.178728ms] Apr 8 23:37:01.986: INFO: Created: latency-svc-frlmh Apr 8 23:37:01.998: INFO: Got endpoints: latency-svc-frlmh [646.249792ms] Apr 8 23:37:02.015: INFO: Created: latency-svc-tcqcw Apr 8 23:37:02.040: INFO: Got endpoints: latency-svc-tcqcw [619.290612ms] Apr 8 23:37:02.091: INFO: Created: latency-svc-s4d9n Apr 8 23:37:02.101: INFO: Got endpoints: latency-svc-s4d9n [622.970918ms] Apr 8 23:37:02.119: INFO: Created: latency-svc-xkmz8 Apr 8 23:37:02.130: INFO: Got endpoints: latency-svc-xkmz8 [578.543464ms] Apr 8 23:37:02.149: INFO: Created: latency-svc-x6hst Apr 8 23:37:02.173: INFO: Got endpoints: latency-svc-x6hst [581.949543ms] Apr 8 23:37:02.223: INFO: Created: latency-svc-zgwmb Apr 8 23:37:02.243: INFO: Got endpoints: latency-svc-zgwmb [619.98228ms] Apr 8 23:37:02.244: INFO: Created: latency-svc-c9lvc Apr 8 23:37:02.259: INFO: Got endpoints: latency-svc-c9lvc [592.640963ms] Apr 8 23:37:02.273: INFO: Created: latency-svc-zvv52 Apr 8 23:37:02.291: INFO: Got endpoints: latency-svc-zvv52 [596.017277ms] Apr 8 23:37:02.310: INFO: Created: latency-svc-5szfh Apr 8 23:37:02.319: INFO: Got endpoints: latency-svc-5szfh [587.13421ms] Apr 8 23:37:02.379: INFO: Created: latency-svc-q9z4l Apr 8 23:37:02.395: INFO: Got endpoints: latency-svc-q9z4l [639.415327ms] Apr 8 23:37:02.395: INFO: Created: latency-svc-tcbgh Apr 8 23:37:02.419: INFO: Got endpoints: latency-svc-tcbgh [601.192573ms] Apr 8 23:37:02.441: INFO: Created: latency-svc-blxwn Apr 8 23:37:02.454: INFO: Got endpoints: latency-svc-blxwn [627.136561ms] Apr 8 23:37:02.454: INFO: Latencies: [44.230793ms 
76.548647ms 143.41615ms 153.895556ms 185.150622ms 267.885774ms 298.403145ms 328.489123ms 468.079964ms 487.023365ms 518.092186ms 566.709889ms 568.624079ms 573.911297ms 578.543464ms 581.949543ms 583.03462ms 584.214709ms 587.13421ms 592.640963ms 594.983127ms 596.017277ms 597.817999ms 599.101346ms 599.334247ms 601.192573ms 602.728337ms 604.503689ms 604.633516ms 606.977909ms 607.41898ms 608.901323ms 610.303384ms 619.290612ms 619.98228ms 620.541818ms 622.970918ms 623.173788ms 623.189439ms 623.620056ms 627.136561ms 628.63201ms 630.483923ms 635.055506ms 635.792491ms 636.114073ms 639.415327ms 641.473469ms 642.139679ms 646.249792ms 646.402226ms 646.416048ms 646.530894ms 646.587898ms 647.290531ms 647.725174ms 648.297356ms 649.957953ms 650.130469ms 652.113211ms 653.346946ms 654.406889ms 655.540357ms 657.286791ms 658.330717ms 660.60802ms 661.013312ms 661.498744ms 661.765778ms 664.796879ms 665.177501ms 665.178728ms 666.617509ms 666.755158ms 667.048899ms 670.379368ms 677.274185ms 678.229382ms 680.130469ms 680.17324ms 681.953945ms 682.133116ms 682.355001ms 682.978227ms 685.157722ms 687.772295ms 689.917494ms 692.428907ms 692.46799ms 694.03176ms 695.140404ms 698.169553ms 699.205594ms 700.222ms 700.596948ms 706.107506ms 706.41649ms 706.707713ms 707.679459ms 708.001683ms 709.896824ms 711.668972ms 712.800234ms 713.100194ms 713.103742ms 713.164005ms 714.014822ms 715.317025ms 715.514205ms 716.315649ms 716.984619ms 717.649471ms 718.842314ms 719.015027ms 719.676764ms 720.609672ms 721.260478ms 721.911763ms 722.433848ms 725.54681ms 727.890556ms 728.796663ms 729.327671ms 729.746377ms 730.077133ms 730.589773ms 730.672585ms 730.816325ms 732.421853ms 732.948388ms 736.150826ms 736.48701ms 736.926284ms 740.844298ms 741.686936ms 742.634263ms 743.273874ms 743.420623ms 743.845828ms 744.041082ms 744.207755ms 746.151325ms 747.107196ms 748.273283ms 750.842718ms 751.353224ms 751.718541ms 752.523472ms 753.559001ms 753.7455ms 754.881914ms 756.168138ms 758.404272ms 761.486506ms 764.028189ms 765.165463ms 
766.100592ms 766.559897ms 766.826737ms 767.392033ms 768.684672ms 769.661566ms 772.504661ms 773.88949ms 774.432472ms 775.783759ms 775.895211ms 777.666016ms 778.925912ms 778.998554ms 783.363821ms 783.7206ms 784.376987ms 785.22712ms 785.749604ms 788.535901ms 791.016691ms 795.717347ms 797.526878ms 797.710853ms 797.822753ms 798.019475ms 805.224817ms 807.763537ms 810.762747ms 810.927595ms 812.616276ms 818.295514ms 818.795361ms 821.879055ms 822.107397ms 823.173859ms 834.620258ms 834.735525ms 835.903447ms 838.266985ms 839.386259ms 840.6517ms 855.280281ms 863.531847ms]
Apr 8 23:37:02.455: INFO: 50 %ile: 709.896824ms
Apr 8 23:37:02.455: INFO: 90 %ile: 797.822753ms
Apr 8 23:37:02.455: INFO: 99 %ile: 855.280281ms
Apr 8 23:37:02.455: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:37:02.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8516" for this suite.
• [SLOW TEST:13.704 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":7,"skipped":54,"failed":0}
SSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:37:02.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 23:37:02.549: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 8 23:37:07.552: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 8 23:37:07.553: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Apr 8 23:37:11.654: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5536 /apis/apps/v1/namespaces/deployment-5536/deployments/test-cleanup-deployment
5c5a8bdd-556c-4f25-a835-71df8fef9d74 6531215 1 2020-04-08 23:37:07 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003183fd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 23:37:07 +0000 UTC,LastTransitionTime:2020-04-08 23:37:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-08 23:37:11 +0000 UTC,LastTransitionTime:2020-04-08 23:37:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 23:37:11.703: INFO: New 
ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-5536 /apis/apps/v1/namespaces/deployment-5536/replicasets/test-cleanup-deployment-577c77b589 66d88e60-1aa3-45b5-807b-e0fceb36dd82 6531202 1 2020-04-08 23:37:07 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 5c5a8bdd-556c-4f25-a835-71df8fef9d74 0xc004dd2097 0xc004dd2098}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004dd2108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:37:11.713: INFO: Pod "test-cleanup-deployment-577c77b589-c9h4p" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-c9h4p test-cleanup-deployment-577c77b589- deployment-5536 
/api/v1/namespaces/deployment-5536/pods/test-cleanup-deployment-577c77b589-c9h4p 54b99684-c1aa-419a-a4d0-e033366a2d1d 6531201 0 2020-04-08 23:37:07 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 66d88e60-1aa3-45b5-807b-e0fceb36dd82 0xc0031d2407 0xc0031d2408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5ccmg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5ccmg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5ccmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{}
,ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.33,StartTime:2020-04-08 23:37:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:37:10 
+0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://94a8d952f0e7897fb6aaa738147f80bf6c3a6d3b49f4af02829bd8010ccddd35,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:37:11.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5536" for this suite.
• [SLOW TEST:9.260 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":8,"skipped":57,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:37:11.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[]
Apr 8 23:37:12.098: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[] (12.797253ms elapsed)
STEP: Creating pod pod1 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod1:[80]]
Apr 8 23:37:15.374: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod1:[80]] (3.251333143s elapsed)
STEP: Creating pod pod2 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 8 23:37:19.837: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod1:[80] pod2:[80]] (4.350518126s elapsed)
STEP: Deleting pod pod1 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[pod2:[80]]
Apr 8 23:37:19.916: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[pod2:[80]] (72.619757ms elapsed)
STEP: Deleting pod pod2 in namespace services-2900
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2900 to expose endpoints map[]
Apr 8 23:37:20.000: INFO: successfully validated that service endpoint-test2 in namespace services-2900 exposes endpoints map[] (55.001181ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:37:20.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2900" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:8.563 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":9,"skipped":67,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:37:20.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 23:37:20.583: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 8 23:37:25.600: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 8 23:37:25.600: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 8 23:37:27.684: INFO: Creating deployment "test-rollover-deployment" Apr 8 23:37:27.748: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 8 23:37:29.755: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 8 23:37:29.761: INFO: Ensure that both replica sets have 1 created replica Apr 8 23:37:29.784: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 8 23:37:29.791: INFO: Updating deployment test-rollover-deployment Apr 8 23:37:29.791: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 8 23:37:31.810: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 8 23:37:31.822: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 8 23:37:31.826: INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:31.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985849, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:33.834: INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:33.834: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985853, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:35.834: INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:35.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985853, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:37.834: 
INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:37.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985853, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:39.833: INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:39.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985853, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:41.833: INFO: all replica sets need to contain the pod-template-hash label Apr 8 23:37:41.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985853, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985847, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:37:43.833: INFO: Apr 8 23:37:43.833: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 8 23:37:43.840: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6009 /apis/apps/v1/namespaces/deployment-6009/deployments/test-rollover-deployment d53711c1-f4c6-4c30-a2eb-502a34d12e5a 6531966 2 2020-04-08 23:37:27 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bcd618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 23:37:27 +0000 UTC,LastTransitionTime:2020-04-08 23:37:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-08 23:37:43 +0000 UTC,LastTransitionTime:2020-04-08 23:37:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 23:37:43.843: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-6009 /apis/apps/v1/namespaces/deployment-6009/replicasets/test-rollover-deployment-78df7bc796 6975d626-d4df-486b-8af5-b7c792178165 6531953 2 2020-04-08 23:37:29 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d53711c1-f4c6-4c30-a2eb-502a34d12e5a 0xc000545197 0xc000545198}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000545228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:37:43.843: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 8 23:37:43.843: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6009 /apis/apps/v1/namespaces/deployment-6009/replicasets/test-rollover-controller c68170c0-110f-4615-b321-b81a50456a87 6531964 2 2020-04-08 23:37:20 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d53711c1-f4c6-4c30-a2eb-502a34d12e5a 0xc000544177 0xc000544178}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000545078 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:37:43.843: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6009 /apis/apps/v1/namespaces/deployment-6009/replicasets/test-rollover-deployment-f6c94f66c 0fda96ec-116f-40eb-b871-41bf9d9b2c1c 6531901 2 2020-04-08 23:37:27 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d53711c1-f4c6-4c30-a2eb-502a34d12e5a 0xc000545290 0xc000545291}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000545328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:37:43.846: INFO: Pod "test-rollover-deployment-78df7bc796-r4z9p" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-r4z9p test-rollover-deployment-78df7bc796- deployment-6009 /api/v1/namespaces/deployment-6009/pods/test-rollover-deployment-78df7bc796-r4z9p d98f38aa-8b29-4d4d-8703-27ee9e4cb242 6531921 0 2020-04-08 23:37:29 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 6975d626-d4df-486b-8af5-b7c792178165 0xc0008f2587 0xc0008f2588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sg2f7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sg2f7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sg2f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:37:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.72,StartTime:2020-04-08 23:37:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:37:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b649e29983cee591a0485baa2f3883b6ed057b4d6c7b2ce628bf0f56ae524071,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:37:43.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6009" for this suite. • [SLOW TEST:23.564 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":10,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:37:43.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 23:37:44.352: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 23:37:46.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985864, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985864, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985864, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721985864, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 23:37:49.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:37:49.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3833" for this suite. STEP: Destroying namespace "webhook-3833-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.976 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":11,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:37:49.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in 
namespace pod-network-test-2193 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 23:37:49.886: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 8 23:37:49.929: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:37:51.932: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:37:53.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:37:55.932: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:37:57.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:37:59.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:38:01.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:38:03.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:38:05.933: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 8 23:38:05.939: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 23:38:07.944: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 8 23:38:11.979: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.37 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2193 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:38:11.979: INFO: >>> kubeConfig: /root/.kube/config I0408 23:38:12.035497 7 log.go:172] (0xc001358790) (0xc001ed8b40) Create stream I0408 23:38:12.035529 7 log.go:172] (0xc001358790) (0xc001ed8b40) Stream added, broadcasting: 1 I0408 23:38:12.037525 7 log.go:172] (0xc001358790) Reply frame received for 1 I0408 23:38:12.037568 7 log.go:172] (0xc001358790) (0xc001ed8be0) Create stream I0408 23:38:12.037578 7 log.go:172] (0xc001358790) (0xc001ed8be0) Stream added, broadcasting: 3 I0408 23:38:12.038661 7 
log.go:172] (0xc001358790) Reply frame received for 3 I0408 23:38:12.038704 7 log.go:172] (0xc001358790) (0xc0014a9b80) Create stream I0408 23:38:12.038716 7 log.go:172] (0xc001358790) (0xc0014a9b80) Stream added, broadcasting: 5 I0408 23:38:12.039709 7 log.go:172] (0xc001358790) Reply frame received for 5 I0408 23:38:13.125481 7 log.go:172] (0xc001358790) Data frame received for 3 I0408 23:38:13.125541 7 log.go:172] (0xc001ed8be0) (3) Data frame handling I0408 23:38:13.125575 7 log.go:172] (0xc001ed8be0) (3) Data frame sent I0408 23:38:13.125972 7 log.go:172] (0xc001358790) Data frame received for 3 I0408 23:38:13.126019 7 log.go:172] (0xc001ed8be0) (3) Data frame handling I0408 23:38:13.126167 7 log.go:172] (0xc001358790) Data frame received for 5 I0408 23:38:13.126252 7 log.go:172] (0xc0014a9b80) (5) Data frame handling I0408 23:38:13.128347 7 log.go:172] (0xc001358790) Data frame received for 1 I0408 23:38:13.128370 7 log.go:172] (0xc001ed8b40) (1) Data frame handling I0408 23:38:13.128391 7 log.go:172] (0xc001ed8b40) (1) Data frame sent I0408 23:38:13.128420 7 log.go:172] (0xc001358790) (0xc001ed8b40) Stream removed, broadcasting: 1 I0408 23:38:13.128451 7 log.go:172] (0xc001358790) Go away received I0408 23:38:13.129005 7 log.go:172] (0xc001358790) (0xc001ed8b40) Stream removed, broadcasting: 1 I0408 23:38:13.129029 7 log.go:172] (0xc001358790) (0xc001ed8be0) Stream removed, broadcasting: 3 I0408 23:38:13.129043 7 log.go:172] (0xc001358790) (0xc0014a9b80) Stream removed, broadcasting: 5 Apr 8 23:38:13.129: INFO: Found all expected endpoints: [netserver-0] Apr 8 23:38:13.133: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.74 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2193 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:38:13.133: INFO: >>> kubeConfig: /root/.kube/config I0408 23:38:13.165867 7 log.go:172] (0xc0013aa840) (0xc0013345a0) 
Create stream I0408 23:38:13.165898 7 log.go:172] (0xc0013aa840) (0xc0013345a0) Stream added, broadcasting: 1 I0408 23:38:13.168037 7 log.go:172] (0xc0013aa840) Reply frame received for 1 I0408 23:38:13.168068 7 log.go:172] (0xc0013aa840) (0xc0014ff720) Create stream I0408 23:38:13.168079 7 log.go:172] (0xc0013aa840) (0xc0014ff720) Stream added, broadcasting: 3 I0408 23:38:13.169362 7 log.go:172] (0xc0013aa840) Reply frame received for 3 I0408 23:38:13.169407 7 log.go:172] (0xc0013aa840) (0xc0014ff7c0) Create stream I0408 23:38:13.169425 7 log.go:172] (0xc0013aa840) (0xc0014ff7c0) Stream added, broadcasting: 5 I0408 23:38:13.170408 7 log.go:172] (0xc0013aa840) Reply frame received for 5 I0408 23:38:14.265098 7 log.go:172] (0xc0013aa840) Data frame received for 3 I0408 23:38:14.265304 7 log.go:172] (0xc0014ff720) (3) Data frame handling I0408 23:38:14.265351 7 log.go:172] (0xc0014ff720) (3) Data frame sent I0408 23:38:14.265379 7 log.go:172] (0xc0013aa840) Data frame received for 3 I0408 23:38:14.265410 7 log.go:172] (0xc0013aa840) Data frame received for 5 I0408 23:38:14.265440 7 log.go:172] (0xc0014ff7c0) (5) Data frame handling I0408 23:38:14.265464 7 log.go:172] (0xc0014ff720) (3) Data frame handling I0408 23:38:14.267322 7 log.go:172] (0xc0013aa840) Data frame received for 1 I0408 23:38:14.267351 7 log.go:172] (0xc0013345a0) (1) Data frame handling I0408 23:38:14.267381 7 log.go:172] (0xc0013345a0) (1) Data frame sent I0408 23:38:14.267406 7 log.go:172] (0xc0013aa840) (0xc0013345a0) Stream removed, broadcasting: 1 I0408 23:38:14.267434 7 log.go:172] (0xc0013aa840) Go away received I0408 23:38:14.267565 7 log.go:172] (0xc0013aa840) (0xc0013345a0) Stream removed, broadcasting: 1 I0408 23:38:14.267598 7 log.go:172] (0xc0013aa840) (0xc0014ff720) Stream removed, broadcasting: 3 I0408 23:38:14.267613 7 log.go:172] (0xc0013aa840) (0xc0014ff7c0) Stream removed, broadcasting: 5 Apr 8 23:38:14.267: INFO: Found all expected endpoints: [netserver-1] [AfterEach] 
[sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:38:14.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2193" for this suite. • [SLOW TEST:24.446 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:38:14.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 23:38:14.324: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 23:38:14.346: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 23:38:14.349: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 8 23:38:14.364: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.364: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:38:14.364: INFO: netserver-0 from pod-network-test-2193 started at 2020-04-08 23:37:49 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.364: INFO: Container webserver ready: true, restart count 0 Apr 8 23:38:14.364: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.364: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 23:38:14.364: INFO: host-test-container-pod from pod-network-test-2193 started at 2020-04-08 23:38:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.364: INFO: Container agnhost ready: true, restart count 0 Apr 8 23:38:14.364: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 8 23:38:14.371: INFO: test-container-pod from pod-network-test-2193 started at 2020-04-08 23:38:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.371: INFO: Container webserver ready: true, restart count 0 Apr 8 23:38:14.371: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.371: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:38:14.371: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.371: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 23:38:14.371: INFO: netserver-1 from pod-network-test-2193 started at 2020-04-08 23:37:49 +0000 UTC (1 container statuses recorded) Apr 8 23:38:14.371: INFO: Container webserver ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 
0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-708f7f29-5ba8-49df-a1a0-8fe94cce2fee 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-708f7f29-5ba8-49df-a1a0-8fe94cce2fee off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-708f7f29-5ba8-49df-a1a0-8fe94cce2fee [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:22.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1756" for this suite. 
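The hostPort conflict exercised in this test (pod4 on 0.0.0.0:54322 schedules, pod5 on 127.0.0.1:54322 does not) can be modeled as a simple predicate. This is an illustrative Python sketch of the overlap rule, not the scheduler's actual code; the tuple layout is an assumption:

```python
def host_ports_conflict(a, b):
    """Return True when two (hostIP, hostPort, protocol) claims collide.

    Port and protocol must match, and the host IPs must overlap; the
    wildcard 0.0.0.0 (or an empty hostIP) overlaps every address.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    wildcard = {"", "0.0.0.0"}
    return ip_a == ip_b or ip_a in wildcard or ip_b in wildcard

# pod4 claims 54322 on the wildcard address; pod5 asks for 54322 on 127.0.0.1:
print(host_ports_conflict(("0.0.0.0", 54322, "TCP"), ("127.0.0.1", 54322, "TCP")))  # True
```

Because 0.0.0.0 overlaps every host IP, pod5 cannot fit on the node pod4 occupies and stays Pending, which is exactly what the test expects.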
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.263 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":13,"skipped":166,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:22.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics 
W0408 23:43:34.115789 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 8 23:43:34.115: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:34.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5816" for this suite. 
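The garbage-collector expectation above (pods owned by both the deleted rc and the surviving rc must not be collected) can be modeled as a one-line rule. A rough sketch of the semantics under test, not the controller's actual implementation:

```python
def gc_should_delete(owner_refs, deleting_owners):
    # A dependent is garbage-collected only when every owner is gone or
    # being deleted; one remaining valid owner keeps it alive.
    return all(owner in deleting_owners for owner in owner_refs)

# Half the pods were given a second owner before the first rc was deleted:
pod_owners = ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"]
print(gc_should_delete(pod_owners, {"simpletest-rc-to-be-deleted"}))  # False
```

Pods whose only owner was simpletest-rc-to-be-deleted are removed; those that also list simpletest-rc-to-stay survive.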
• [SLOW TEST:11.583 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":14,"skipped":185,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:34.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 8 23:43:38.271: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] 
Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:38.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1142" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":195,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:38.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39 Apr 8 23:43:38.421: INFO: Pod name my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39: Found 0 pods out of 1 Apr 8 23:43:43.424: INFO: Pod name my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39: Found 1 pods out of 1 Apr 8 23:43:43.424: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39" are running Apr 8 23:43:43.426: INFO: Pod "my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39-r42ch" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:43:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:43:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:43:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:43:38 +0000 UTC Reason: Message:}]) Apr 8 23:43:43.426: INFO: Trying to dial the pod Apr 8 23:43:48.438: INFO: Controller my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39: Got expected result from replica 1 [my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39-r42ch]: "my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39-r42ch", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:48.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6996" for this suite. 
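The "Trying to dial the pod" step succeeds because the image serves its own hostname, so each replica's response must equal its pod name. A minimal sketch of that verification (illustrative only, not the e2e framework's code):

```python
def all_replicas_serve_hostname(responses):
    # Map of pod name -> HTTP response body; each replica must answer
    # with its own pod name, since the test image serves its hostname.
    return all(reply == pod_name for pod_name, reply in responses.items())

name = "my-hostname-basic-53fc0c18-ee6a-444e-9819-b08c0be2ef39-r42ch"
print(all_replicas_serve_hostname({name: name}))  # True
```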
• [SLOW TEST:10.140 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":16,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:48.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 8 23:43:48.507: INFO: Waiting up to 5m0s for pod "pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6" in namespace "emptydir-4755" to be "Succeeded or Failed" Apr 8 23:43:48.510: INFO: Pod "pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.745736ms Apr 8 23:43:50.514: INFO: Pod "pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006382743s Apr 8 23:43:52.518: INFO: Pod "pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010302583s STEP: Saw pod success Apr 8 23:43:52.518: INFO: Pod "pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6" satisfied condition "Succeeded or Failed" Apr 8 23:43:52.520: INFO: Trying to get logs from node latest-worker pod pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6 container test-container: STEP: delete the pod Apr 8 23:43:52.554: INFO: Waiting for pod pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6 to disappear Apr 8 23:43:52.606: INFO: Pod pod-7dd26336-e37b-457e-bc26-4dbfb3f2d2f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:52.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4755" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:52.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:43:56.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4837" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":278,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:43:56.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 8 23:43:56.800: INFO: Created pod &Pod{ObjectMeta:{dns-6955 dns-6955 /api/v1/namespaces/dns-6955/pods/dns-6955 1102334e-c97c-4a16-9b9c-972e6968a8e7 6533551 0 2020-04-08 23:43:56 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xb84g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xb84g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xb84g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets
:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:43:56.827: INFO: The status of Pod dns-6955 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:43:58.846: INFO: The status of Pod dns-6955 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:44:00.831: INFO: The status of Pod dns-6955 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
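With dnsPolicy set to None, the kubelet builds the pod's resolv.conf entirely from the dnsConfig in the spec logged above (nameservers [1.1.1.1], searches [resolv.conf.local]). A simplified sketch of that rendering, not the kubelet's actual code:

```python
def render_resolv_conf(dns_config):
    # dnsPolicy: None means no inheritance from the node or cluster DNS;
    # only the pod's own dnsConfig entries appear in resolv.conf.
    lines = ["nameserver " + ip for ip in dns_config.get("nameservers", [])]
    searches = dns_config.get("searches", [])
    if searches:
        lines.append("search " + " ".join(searches))
    return "\n".join(lines)

print(render_resolv_conf({"nameservers": ["1.1.1.1"], "searches": ["resolv.conf.local"]}))
```

The two exec probes that follow (/agnhost dns-suffix and /agnhost dns-server-list) check exactly these two lines from inside the pod.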
Apr 8 23:44:00.831: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6955 PodName:dns-6955 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:44:00.831: INFO: >>> kubeConfig: /root/.kube/config I0408 23:44:00.868422 7 log.go:172] (0xc0044d2840) (0xc001c03b80) Create stream I0408 23:44:00.868464 7 log.go:172] (0xc0044d2840) (0xc001c03b80) Stream added, broadcasting: 1 I0408 23:44:00.871061 7 log.go:172] (0xc0044d2840) Reply frame received for 1 I0408 23:44:00.871109 7 log.go:172] (0xc0044d2840) (0xc0018d6960) Create stream I0408 23:44:00.871131 7 log.go:172] (0xc0044d2840) (0xc0018d6960) Stream added, broadcasting: 3 I0408 23:44:00.872091 7 log.go:172] (0xc0044d2840) Reply frame received for 3 I0408 23:44:00.872139 7 log.go:172] (0xc0044d2840) (0xc001c03c20) Create stream I0408 23:44:00.872161 7 log.go:172] (0xc0044d2840) (0xc001c03c20) Stream added, broadcasting: 5 I0408 23:44:00.873430 7 log.go:172] (0xc0044d2840) Reply frame received for 5 I0408 23:44:00.963668 7 log.go:172] (0xc0044d2840) Data frame received for 3 I0408 23:44:00.963701 7 log.go:172] (0xc0018d6960) (3) Data frame handling I0408 23:44:00.963720 7 log.go:172] (0xc0018d6960) (3) Data frame sent I0408 23:44:00.965756 7 log.go:172] (0xc0044d2840) Data frame received for 3 I0408 23:44:00.965811 7 log.go:172] (0xc0018d6960) (3) Data frame handling I0408 23:44:00.965839 7 log.go:172] (0xc0044d2840) Data frame received for 5 I0408 23:44:00.965855 7 log.go:172] (0xc001c03c20) (5) Data frame handling I0408 23:44:00.967367 7 log.go:172] (0xc0044d2840) Data frame received for 1 I0408 23:44:00.967392 7 log.go:172] (0xc001c03b80) (1) Data frame handling I0408 23:44:00.967407 7 log.go:172] (0xc001c03b80) (1) Data frame sent I0408 23:44:00.967420 7 log.go:172] (0xc0044d2840) (0xc001c03b80) Stream removed, broadcasting: 1 I0408 23:44:00.967434 7 log.go:172] (0xc0044d2840) Go away received I0408 23:44:00.967557 7 log.go:172] (0xc0044d2840) 
(0xc001c03b80) Stream removed, broadcasting: 1 I0408 23:44:00.967578 7 log.go:172] (0xc0044d2840) (0xc0018d6960) Stream removed, broadcasting: 3 I0408 23:44:00.967593 7 log.go:172] (0xc0044d2840) (0xc001c03c20) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 8 23:44:00.967: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6955 PodName:dns-6955 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:44:00.967: INFO: >>> kubeConfig: /root/.kube/config I0408 23:44:00.992426 7 log.go:172] (0xc0044d2e70) (0xc001c03f40) Create stream I0408 23:44:00.992464 7 log.go:172] (0xc0044d2e70) (0xc001c03f40) Stream added, broadcasting: 1 I0408 23:44:00.994743 7 log.go:172] (0xc0044d2e70) Reply frame received for 1 I0408 23:44:00.994789 7 log.go:172] (0xc0044d2e70) (0xc001928820) Create stream I0408 23:44:00.994809 7 log.go:172] (0xc0044d2e70) (0xc001928820) Stream added, broadcasting: 3 I0408 23:44:00.995943 7 log.go:172] (0xc0044d2e70) Reply frame received for 3 I0408 23:44:00.995990 7 log.go:172] (0xc0044d2e70) (0xc001558000) Create stream I0408 23:44:00.996001 7 log.go:172] (0xc0044d2e70) (0xc001558000) Stream added, broadcasting: 5 I0408 23:44:00.997259 7 log.go:172] (0xc0044d2e70) Reply frame received for 5 I0408 23:44:01.075122 7 log.go:172] (0xc0044d2e70) Data frame received for 3 I0408 23:44:01.075158 7 log.go:172] (0xc001928820) (3) Data frame handling I0408 23:44:01.075178 7 log.go:172] (0xc001928820) (3) Data frame sent I0408 23:44:01.076303 7 log.go:172] (0xc0044d2e70) Data frame received for 5 I0408 23:44:01.076350 7 log.go:172] (0xc001558000) (5) Data frame handling I0408 23:44:01.076382 7 log.go:172] (0xc0044d2e70) Data frame received for 3 I0408 23:44:01.076422 7 log.go:172] (0xc001928820) (3) Data frame handling I0408 23:44:01.078461 7 log.go:172] (0xc0044d2e70) Data frame received for 1 I0408 23:44:01.078496 7 log.go:172] (0xc001c03f40) (1) Data 
frame handling I0408 23:44:01.078528 7 log.go:172] (0xc001c03f40) (1) Data frame sent I0408 23:44:01.078559 7 log.go:172] (0xc0044d2e70) (0xc001c03f40) Stream removed, broadcasting: 1 I0408 23:44:01.078590 7 log.go:172] (0xc0044d2e70) Go away received I0408 23:44:01.078703 7 log.go:172] (0xc0044d2e70) (0xc001c03f40) Stream removed, broadcasting: 1 I0408 23:44:01.078730 7 log.go:172] (0xc0044d2e70) (0xc001928820) Stream removed, broadcasting: 3 I0408 23:44:01.078743 7 log.go:172] (0xc0044d2e70) (0xc001558000) Stream removed, broadcasting: 5 Apr 8 23:44:01.078: INFO: Deleting pod dns-6955... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:44:01.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6955" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":19,"skipped":290,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:44:01.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-2733/secret-test-4eaaaefd-4902-43db-9d76-c4ec0fde5fae STEP: Creating a pod to test consume secrets Apr 8 23:44:01.183: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9" in namespace "secrets-2733" to be "Succeeded or Failed" Apr 8 23:44:01.187: INFO: Pod "pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846858ms Apr 8 23:44:03.191: INFO: Pod "pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007607813s Apr 8 23:44:05.195: INFO: Pod "pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012000248s STEP: Saw pod success Apr 8 23:44:05.195: INFO: Pod "pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9" satisfied condition "Succeeded or Failed" Apr 8 23:44:05.199: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9 container env-test: STEP: delete the pod Apr 8 23:44:05.219: INFO: Waiting for pod pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9 to disappear Apr 8 23:44:05.223: INFO: Pod pod-configmaps-b9c4223b-10c0-4cb4-b0ae-cac63d8ef1f9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:44:05.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2733" for this suite. 
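The "consumable via the environment" check relies on Secret values being stored base64-encoded in the API object and injected as decoded environment variables. A simplified model of that decoding (the key and value below are hypothetical, not taken from the test):

```python
import base64

def secret_to_env(secret_data):
    # Secret .data values arrive base64-encoded; the kubelet exposes the
    # decoded bytes to the container as environment variables.
    return {key: base64.b64decode(val).decode() for key, val in secret_data.items()}

encoded = {"SECRET_KEY": base64.b64encode(b"secret-value").decode()}
print(secret_to_env(encoded))  # {'SECRET_KEY': 'secret-value'}
```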
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":293,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:44:05.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-ndxd STEP: Creating a pod to test atomic-volume-subpath Apr 8 23:44:05.333: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ndxd" in namespace "subpath-9384" to be "Succeeded or Failed" Apr 8 23:44:05.374: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.882618ms Apr 8 23:44:07.377: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043535469s Apr 8 23:44:09.383: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 4.049658255s Apr 8 23:44:11.387: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.053567622s Apr 8 23:44:13.390: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 8.05721291s Apr 8 23:44:15.394: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 10.061016934s Apr 8 23:44:17.399: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 12.065540554s Apr 8 23:44:19.403: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 14.069993421s Apr 8 23:44:21.407: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 16.074131734s Apr 8 23:44:23.410: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 18.077215012s Apr 8 23:44:25.414: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 20.080725667s Apr 8 23:44:27.418: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Running", Reason="", readiness=true. Elapsed: 22.084825217s Apr 8 23:44:29.439: INFO: Pod "pod-subpath-test-configmap-ndxd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.105694284s STEP: Saw pod success Apr 8 23:44:29.439: INFO: Pod "pod-subpath-test-configmap-ndxd" satisfied condition "Succeeded or Failed" Apr 8 23:44:29.442: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-ndxd container test-container-subpath-configmap-ndxd: STEP: delete the pod Apr 8 23:44:29.476: INFO: Waiting for pod pod-subpath-test-configmap-ndxd to disappear Apr 8 23:44:29.480: INFO: Pod pod-subpath-test-configmap-ndxd no longer exists STEP: Deleting pod pod-subpath-test-configmap-ndxd Apr 8 23:44:29.480: INFO: Deleting pod "pod-subpath-test-configmap-ndxd" in namespace "subpath-9384" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:44:29.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9384" for this suite. • [SLOW TEST:24.257 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":21,"skipped":305,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:44:29.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:44:29.536: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:44:33.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3967" for this suite. 
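Retrieving logs over websockets uses the same pod log subresource path as plain log retrieval; the client upgrades the connection. A sketch of the URL construction (the pod and container names here are illustrative, not from this test run):

```python
def pod_log_stream_url(api_host, namespace, pod, container):
    # The /log subresource serves both plain GETs and websocket-upgraded
    # streaming; follow=true keeps the stream open as the container logs.
    return ("wss://%s/api/v1/namespaces/%s/pods/%s/log?container=%s&follow=true"
            % (api_host, namespace, pod, container))

print(pod_log_stream_url("172.30.12.66:32771", "pods-3967", "example-pod", "main"))
```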
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":324,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:44:33.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6139 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-6139 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6139 Apr 8 23:44:33.721: INFO: Found 0 stateful pods, waiting for 1 Apr 8 23:44:43.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 8 23:44:43.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:44:46.209: INFO: stderr: "I0408 23:44:46.088241 52 log.go:172] (0xc00003a0b0) (0xc0006ef4a0) Create stream\nI0408 23:44:46.088313 52 log.go:172] (0xc00003a0b0) (0xc0006ef4a0) Stream added, broadcasting: 1\nI0408 23:44:46.091077 52 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0408 23:44:46.091138 52 log.go:172] (0xc00003a0b0) (0xc0006ef540) Create stream\nI0408 23:44:46.091165 52 log.go:172] (0xc00003a0b0) (0xc0006ef540) Stream added, broadcasting: 3\nI0408 23:44:46.092088 52 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0408 23:44:46.092123 52 log.go:172] (0xc00003a0b0) (0xc0006ef5e0) Create stream\nI0408 23:44:46.092136 52 log.go:172] (0xc00003a0b0) (0xc0006ef5e0) Stream added, broadcasting: 5\nI0408 23:44:46.093090 52 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0408 23:44:46.171093 52 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0408 23:44:46.171123 52 log.go:172] (0xc0006ef5e0) (5) Data frame handling\nI0408 23:44:46.171157 52 log.go:172] (0xc0006ef5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:44:46.201042 52 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0408 23:44:46.201078 52 log.go:172] (0xc0006ef540) (3) Data frame handling\nI0408 23:44:46.201097 52 log.go:172] (0xc0006ef540) (3) Data frame sent\nI0408 23:44:46.201495 52 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0408 23:44:46.201544 52 log.go:172] (0xc0006ef540) (3) Data frame handling\nI0408 23:44:46.201582 52 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0408 23:44:46.201612 52 log.go:172] (0xc0006ef5e0) (5) Data frame handling\nI0408 23:44:46.203735 52 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0408 23:44:46.203777 52 log.go:172] (0xc0006ef4a0) (1) Data frame handling\nI0408 23:44:46.203822 52 log.go:172] (0xc0006ef4a0) (1) Data frame sent\nI0408 23:44:46.203851 52 log.go:172] (0xc00003a0b0) (0xc0006ef4a0) Stream 
removed, broadcasting: 1\nI0408 23:44:46.203890 52 log.go:172] (0xc00003a0b0) Go away received\nI0408 23:44:46.204372 52 log.go:172] (0xc00003a0b0) (0xc0006ef4a0) Stream removed, broadcasting: 1\nI0408 23:44:46.204413 52 log.go:172] (0xc00003a0b0) (0xc0006ef540) Stream removed, broadcasting: 3\nI0408 23:44:46.204439 52 log.go:172] (0xc00003a0b0) (0xc0006ef5e0) Stream removed, broadcasting: 5\n" Apr 8 23:44:46.209: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:44:46.209: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 23:44:46.213: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 8 23:44:56.218: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 23:44:56.218: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 23:44:56.237: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:44:56.237: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:44:56.237: INFO: Apr 8 23:44:56.237: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 8 23:44:57.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989968656s Apr 8 23:44:58.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.949887781s Apr 8 23:44:59.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.942697976s Apr 8 23:45:00.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.90647462s 
Apr 8 23:45:01.340: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.902086616s Apr 8 23:45:02.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.887042692s Apr 8 23:45:03.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.883009214s Apr 8 23:45:04.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.877318306s Apr 8 23:45:05.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 874.161633ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6139 Apr 8 23:45:06.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 23:45:06.608: INFO: stderr: "I0408 23:45:06.505777 77 log.go:172] (0xc000a1e000) (0xc0007e3400) Create stream\nI0408 23:45:06.505851 77 log.go:172] (0xc000a1e000) (0xc0007e3400) Stream added, broadcasting: 1\nI0408 23:45:06.508731 77 log.go:172] (0xc000a1e000) Reply frame received for 1\nI0408 23:45:06.508779 77 log.go:172] (0xc000a1e000) (0xc0009c2000) Create stream\nI0408 23:45:06.508792 77 log.go:172] (0xc000a1e000) (0xc0009c2000) Stream added, broadcasting: 3\nI0408 23:45:06.510197 77 log.go:172] (0xc000a1e000) Reply frame received for 3\nI0408 23:45:06.510241 77 log.go:172] (0xc000a1e000) (0xc0009c20a0) Create stream\nI0408 23:45:06.510256 77 log.go:172] (0xc000a1e000) (0xc0009c20a0) Stream added, broadcasting: 5\nI0408 23:45:06.511324 77 log.go:172] (0xc000a1e000) Reply frame received for 5\nI0408 23:45:06.601386 77 log.go:172] (0xc000a1e000) Data frame received for 5\nI0408 23:45:06.601424 77 log.go:172] (0xc0009c20a0) (5) Data frame handling\nI0408 23:45:06.601440 77 log.go:172] (0xc0009c20a0) (5) Data frame sent\nI0408 23:45:06.601453 77 log.go:172] (0xc000a1e000) Data frame received for 5\nI0408 23:45:06.601465 77 
log.go:172] (0xc0009c20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 23:45:06.601493 77 log.go:172] (0xc000a1e000) Data frame received for 3\nI0408 23:45:06.601514 77 log.go:172] (0xc0009c2000) (3) Data frame handling\nI0408 23:45:06.601532 77 log.go:172] (0xc0009c2000) (3) Data frame sent\nI0408 23:45:06.601544 77 log.go:172] (0xc000a1e000) Data frame received for 3\nI0408 23:45:06.601557 77 log.go:172] (0xc0009c2000) (3) Data frame handling\nI0408 23:45:06.603544 77 log.go:172] (0xc000a1e000) Data frame received for 1\nI0408 23:45:06.603581 77 log.go:172] (0xc0007e3400) (1) Data frame handling\nI0408 23:45:06.603612 77 log.go:172] (0xc0007e3400) (1) Data frame sent\nI0408 23:45:06.603647 77 log.go:172] (0xc000a1e000) (0xc0007e3400) Stream removed, broadcasting: 1\nI0408 23:45:06.603682 77 log.go:172] (0xc000a1e000) Go away received\nI0408 23:45:06.604248 77 log.go:172] (0xc000a1e000) (0xc0007e3400) Stream removed, broadcasting: 1\nI0408 23:45:06.604287 77 log.go:172] (0xc000a1e000) (0xc0009c2000) Stream removed, broadcasting: 3\nI0408 23:45:06.604311 77 log.go:172] (0xc000a1e000) (0xc0009c20a0) Stream removed, broadcasting: 5\n" Apr 8 23:45:06.609: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 23:45:06.609: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 23:45:06.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 23:45:06.871: INFO: stderr: "I0408 23:45:06.803433 98 log.go:172] (0xc000a52160) (0xc0008be000) Create stream\nI0408 23:45:06.803498 98 log.go:172] (0xc000a52160) (0xc0008be000) Stream added, broadcasting: 1\nI0408 23:45:06.806416 98 log.go:172] (0xc000a52160) Reply frame received for 1\nI0408 
23:45:06.806474 98 log.go:172] (0xc000a52160) (0xc000a3c000) Create stream\nI0408 23:45:06.806490 98 log.go:172] (0xc000a52160) (0xc000a3c000) Stream added, broadcasting: 3\nI0408 23:45:06.807583 98 log.go:172] (0xc000a52160) Reply frame received for 3\nI0408 23:45:06.807614 98 log.go:172] (0xc000a52160) (0xc0008be0a0) Create stream\nI0408 23:45:06.807624 98 log.go:172] (0xc000a52160) (0xc0008be0a0) Stream added, broadcasting: 5\nI0408 23:45:06.808522 98 log.go:172] (0xc000a52160) Reply frame received for 5\nI0408 23:45:06.864779 98 log.go:172] (0xc000a52160) Data frame received for 5\nI0408 23:45:06.864815 98 log.go:172] (0xc0008be0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 23:45:06.864846 98 log.go:172] (0xc000a52160) Data frame received for 3\nI0408 23:45:06.864899 98 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0408 23:45:06.864929 98 log.go:172] (0xc000a3c000) (3) Data frame sent\nI0408 23:45:06.864951 98 log.go:172] (0xc000a52160) Data frame received for 3\nI0408 23:45:06.864973 98 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0408 23:45:06.865006 98 log.go:172] (0xc0008be0a0) (5) Data frame sent\nI0408 23:45:06.865043 98 log.go:172] (0xc000a52160) Data frame received for 5\nI0408 23:45:06.865069 98 log.go:172] (0xc0008be0a0) (5) Data frame handling\nI0408 23:45:06.867025 98 log.go:172] (0xc000a52160) Data frame received for 1\nI0408 23:45:06.867065 98 log.go:172] (0xc0008be000) (1) Data frame handling\nI0408 23:45:06.867099 98 log.go:172] (0xc0008be000) (1) Data frame sent\nI0408 23:45:06.867133 98 log.go:172] (0xc000a52160) (0xc0008be000) Stream removed, broadcasting: 1\nI0408 23:45:06.867164 98 log.go:172] (0xc000a52160) Go away received\nI0408 23:45:06.867445 98 log.go:172] (0xc000a52160) (0xc0008be000) Stream removed, broadcasting: 1\nI0408 23:45:06.867459 98 log.go:172] (0xc000a52160) (0xc000a3c000) Stream removed, 
broadcasting: 3\nI0408 23:45:06.867465 98 log.go:172] (0xc000a52160) (0xc0008be0a0) Stream removed, broadcasting: 5\n" Apr 8 23:45:06.871: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 23:45:06.871: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 23:45:06.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 23:45:07.070: INFO: stderr: "I0408 23:45:07.002083 120 log.go:172] (0xc0005d06e0) (0xc000954280) Create stream\nI0408 23:45:07.002140 120 log.go:172] (0xc0005d06e0) (0xc000954280) Stream added, broadcasting: 1\nI0408 23:45:07.005344 120 log.go:172] (0xc0005d06e0) Reply frame received for 1\nI0408 23:45:07.005388 120 log.go:172] (0xc0005d06e0) (0xc0009543c0) Create stream\nI0408 23:45:07.005401 120 log.go:172] (0xc0005d06e0) (0xc0009543c0) Stream added, broadcasting: 3\nI0408 23:45:07.006490 120 log.go:172] (0xc0005d06e0) Reply frame received for 3\nI0408 23:45:07.006540 120 log.go:172] (0xc0005d06e0) (0xc000954460) Create stream\nI0408 23:45:07.006577 120 log.go:172] (0xc0005d06e0) (0xc000954460) Stream added, broadcasting: 5\nI0408 23:45:07.008772 120 log.go:172] (0xc0005d06e0) Reply frame received for 5\nI0408 23:45:07.063709 120 log.go:172] (0xc0005d06e0) Data frame received for 3\nI0408 23:45:07.063753 120 log.go:172] (0xc0009543c0) (3) Data frame handling\nI0408 23:45:07.063768 120 log.go:172] (0xc0009543c0) (3) Data frame sent\nI0408 23:45:07.063779 120 log.go:172] (0xc0005d06e0) Data frame received for 3\nI0408 23:45:07.063788 120 log.go:172] (0xc0009543c0) (3) Data frame handling\nI0408 23:45:07.063821 120 log.go:172] (0xc0005d06e0) Data frame received for 5\nI0408 23:45:07.063831 120 log.go:172] (0xc000954460) (5) Data frame handling\nI0408 
23:45:07.063866 120 log.go:172] (0xc000954460) (5) Data frame sent\nI0408 23:45:07.063900 120 log.go:172] (0xc0005d06e0) Data frame received for 5\nI0408 23:45:07.063921 120 log.go:172] (0xc000954460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 23:45:07.065527 120 log.go:172] (0xc0005d06e0) Data frame received for 1\nI0408 23:45:07.065568 120 log.go:172] (0xc000954280) (1) Data frame handling\nI0408 23:45:07.065592 120 log.go:172] (0xc000954280) (1) Data frame sent\nI0408 23:45:07.065609 120 log.go:172] (0xc0005d06e0) (0xc000954280) Stream removed, broadcasting: 1\nI0408 23:45:07.065654 120 log.go:172] (0xc0005d06e0) Go away received\nI0408 23:45:07.065970 120 log.go:172] (0xc0005d06e0) (0xc000954280) Stream removed, broadcasting: 1\nI0408 23:45:07.065989 120 log.go:172] (0xc0005d06e0) (0xc0009543c0) Stream removed, broadcasting: 3\nI0408 23:45:07.066001 120 log.go:172] (0xc0005d06e0) (0xc000954460) Stream removed, broadcasting: 5\n" Apr 8 23:45:07.070: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 23:45:07.070: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 23:45:07.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:45:07.075: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:45:07.075: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 8 23:45:07.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:45:07.286: INFO: stderr: "I0408 
23:45:07.209406 141 log.go:172] (0xc00003a580) (0xc0005bf680) Create stream\nI0408 23:45:07.209505 141 log.go:172] (0xc00003a580) (0xc0005bf680) Stream added, broadcasting: 1\nI0408 23:45:07.212937 141 log.go:172] (0xc00003a580) Reply frame received for 1\nI0408 23:45:07.212993 141 log.go:172] (0xc00003a580) (0xc0009c2000) Create stream\nI0408 23:45:07.213013 141 log.go:172] (0xc00003a580) (0xc0009c2000) Stream added, broadcasting: 3\nI0408 23:45:07.214070 141 log.go:172] (0xc00003a580) Reply frame received for 3\nI0408 23:45:07.214101 141 log.go:172] (0xc00003a580) (0xc000ade000) Create stream\nI0408 23:45:07.214109 141 log.go:172] (0xc00003a580) (0xc000ade000) Stream added, broadcasting: 5\nI0408 23:45:07.214981 141 log.go:172] (0xc00003a580) Reply frame received for 5\nI0408 23:45:07.278744 141 log.go:172] (0xc00003a580) Data frame received for 5\nI0408 23:45:07.278795 141 log.go:172] (0xc000ade000) (5) Data frame handling\nI0408 23:45:07.278812 141 log.go:172] (0xc000ade000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:45:07.278835 141 log.go:172] (0xc00003a580) Data frame received for 3\nI0408 23:45:07.278846 141 log.go:172] (0xc0009c2000) (3) Data frame handling\nI0408 23:45:07.278858 141 log.go:172] (0xc0009c2000) (3) Data frame sent\nI0408 23:45:07.278871 141 log.go:172] (0xc00003a580) Data frame received for 3\nI0408 23:45:07.278883 141 log.go:172] (0xc0009c2000) (3) Data frame handling\nI0408 23:45:07.278898 141 log.go:172] (0xc00003a580) Data frame received for 5\nI0408 23:45:07.278914 141 log.go:172] (0xc000ade000) (5) Data frame handling\nI0408 23:45:07.281501 141 log.go:172] (0xc00003a580) Data frame received for 1\nI0408 23:45:07.281531 141 log.go:172] (0xc0005bf680) (1) Data frame handling\nI0408 23:45:07.281551 141 log.go:172] (0xc0005bf680) (1) Data frame sent\nI0408 23:45:07.281795 141 log.go:172] (0xc00003a580) (0xc0005bf680) Stream removed, broadcasting: 1\nI0408 23:45:07.281853 141 log.go:172] 
(0xc00003a580) Go away received\nI0408 23:45:07.282251 141 log.go:172] (0xc00003a580) (0xc0005bf680) Stream removed, broadcasting: 1\nI0408 23:45:07.282272 141 log.go:172] (0xc00003a580) (0xc0009c2000) Stream removed, broadcasting: 3\nI0408 23:45:07.282283 141 log.go:172] (0xc00003a580) (0xc000ade000) Stream removed, broadcasting: 5\n" Apr 8 23:45:07.287: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:45:07.287: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 23:45:07.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:45:07.514: INFO: stderr: "I0408 23:45:07.402855 161 log.go:172] (0xc0009a40b0) (0xc000310be0) Create stream\nI0408 23:45:07.402897 161 log.go:172] (0xc0009a40b0) (0xc000310be0) Stream added, broadcasting: 1\nI0408 23:45:07.404970 161 log.go:172] (0xc0009a40b0) Reply frame received for 1\nI0408 23:45:07.404999 161 log.go:172] (0xc0009a40b0) (0xc0008d2000) Create stream\nI0408 23:45:07.405007 161 log.go:172] (0xc0009a40b0) (0xc0008d2000) Stream added, broadcasting: 3\nI0408 23:45:07.406033 161 log.go:172] (0xc0009a40b0) Reply frame received for 3\nI0408 23:45:07.406099 161 log.go:172] (0xc0009a40b0) (0xc000667400) Create stream\nI0408 23:45:07.406130 161 log.go:172] (0xc0009a40b0) (0xc000667400) Stream added, broadcasting: 5\nI0408 23:45:07.406949 161 log.go:172] (0xc0009a40b0) Reply frame received for 5\nI0408 23:45:07.475087 161 log.go:172] (0xc0009a40b0) Data frame received for 5\nI0408 23:45:07.475112 161 log.go:172] (0xc000667400) (5) Data frame handling\nI0408 23:45:07.475125 161 log.go:172] (0xc000667400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:45:07.508214 161 log.go:172] (0xc0009a40b0) 
Data frame received for 3\nI0408 23:45:07.508250 161 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0408 23:45:07.508280 161 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0408 23:45:07.508479 161 log.go:172] (0xc0009a40b0) Data frame received for 3\nI0408 23:45:07.508511 161 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0408 23:45:07.508643 161 log.go:172] (0xc0009a40b0) Data frame received for 5\nI0408 23:45:07.508659 161 log.go:172] (0xc000667400) (5) Data frame handling\nI0408 23:45:07.510436 161 log.go:172] (0xc0009a40b0) Data frame received for 1\nI0408 23:45:07.510459 161 log.go:172] (0xc000310be0) (1) Data frame handling\nI0408 23:45:07.510483 161 log.go:172] (0xc000310be0) (1) Data frame sent\nI0408 23:45:07.510497 161 log.go:172] (0xc0009a40b0) (0xc000310be0) Stream removed, broadcasting: 1\nI0408 23:45:07.510608 161 log.go:172] (0xc0009a40b0) Go away received\nI0408 23:45:07.510813 161 log.go:172] (0xc0009a40b0) (0xc000310be0) Stream removed, broadcasting: 1\nI0408 23:45:07.510827 161 log.go:172] (0xc0009a40b0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0408 23:45:07.510834 161 log.go:172] (0xc0009a40b0) (0xc000667400) Stream removed, broadcasting: 5\n" Apr 8 23:45:07.515: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:45:07.515: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 23:45:07.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6139 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:45:07.760: INFO: stderr: "I0408 23:45:07.653831 184 log.go:172] (0xc0000e8370) (0xc000a08000) Create stream\nI0408 23:45:07.653918 184 log.go:172] (0xc0000e8370) (0xc000a08000) Stream added, broadcasting: 1\nI0408 23:45:07.656974 184 log.go:172] (0xc0000e8370) Reply frame received for 
1\nI0408 23:45:07.657029 184 log.go:172] (0xc0000e8370) (0xc0005e4000) Create stream\nI0408 23:45:07.657044 184 log.go:172] (0xc0000e8370) (0xc0005e4000) Stream added, broadcasting: 3\nI0408 23:45:07.658190 184 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0408 23:45:07.658237 184 log.go:172] (0xc0000e8370) (0xc000650000) Create stream\nI0408 23:45:07.658254 184 log.go:172] (0xc0000e8370) (0xc000650000) Stream added, broadcasting: 5\nI0408 23:45:07.659343 184 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0408 23:45:07.723979 184 log.go:172] (0xc0000e8370) Data frame received for 5\nI0408 23:45:07.724033 184 log.go:172] (0xc000650000) (5) Data frame handling\nI0408 23:45:07.724076 184 log.go:172] (0xc000650000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:45:07.754284 184 log.go:172] (0xc0000e8370) Data frame received for 3\nI0408 23:45:07.754316 184 log.go:172] (0xc0005e4000) (3) Data frame handling\nI0408 23:45:07.754332 184 log.go:172] (0xc0005e4000) (3) Data frame sent\nI0408 23:45:07.754451 184 log.go:172] (0xc0000e8370) Data frame received for 5\nI0408 23:45:07.754466 184 log.go:172] (0xc000650000) (5) Data frame handling\nI0408 23:45:07.754492 184 log.go:172] (0xc0000e8370) Data frame received for 3\nI0408 23:45:07.754512 184 log.go:172] (0xc0005e4000) (3) Data frame handling\nI0408 23:45:07.756194 184 log.go:172] (0xc0000e8370) Data frame received for 1\nI0408 23:45:07.756209 184 log.go:172] (0xc000a08000) (1) Data frame handling\nI0408 23:45:07.756228 184 log.go:172] (0xc000a08000) (1) Data frame sent\nI0408 23:45:07.756239 184 log.go:172] (0xc0000e8370) (0xc000a08000) Stream removed, broadcasting: 1\nI0408 23:45:07.756272 184 log.go:172] (0xc0000e8370) Go away received\nI0408 23:45:07.756487 184 log.go:172] (0xc0000e8370) (0xc000a08000) Stream removed, broadcasting: 1\nI0408 23:45:07.756498 184 log.go:172] (0xc0000e8370) (0xc0005e4000) Stream removed, broadcasting: 3\nI0408 23:45:07.756504 184 
log.go:172] (0xc0000e8370) (0xc000650000) Stream removed, broadcasting: 5\n" Apr 8 23:45:07.760: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:45:07.760: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 23:45:07.760: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 23:45:07.786: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 8 23:45:17.795: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 23:45:17.795: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 8 23:45:17.795: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 8 23:45:17.820: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:17.820: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:45:17.820: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:17.820: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:17.820: INFO: Apr 8 23:45:17.820: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 23:45:18.871: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:18.871: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:45:18.871: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:18.871: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:18.871: INFO: Apr 8 23:45:18.871: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 23:45:19.876: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:19.876: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:45:19.876: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:19.876: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:19.876: INFO: Apr 8 23:45:19.876: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 23:45:20.882: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:20.882: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:45:20.882: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:20.882: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:20.882: INFO: Apr 8 23:45:20.882: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 23:45:21.887: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:21.887: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:07 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:33 +0000 UTC }] Apr 8 23:45:21.887: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:21.887: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:21.887: INFO: Apr 8 23:45:21.887: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 23:45:22.891: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 23:45:22.891: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:45:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 23:44:56 +0000 UTC }] Apr 8 23:45:22.891: INFO: Apr 8 23:45:22.891: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 8 23:45:23.895: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.910543084s Apr 8 23:45:24.900: INFO: Verifying 
statefulset ss doesn't scale past 0 for another 2.906323195s Apr 8 23:45:25.903: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.90215001s Apr 8 23:45:26.907: INFO: Verifying statefulset ss doesn't scale past 0 for another 898.48055ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6139 Apr 8 23:45:27.911: INFO: Scaling statefulset ss to 0 Apr 8 23:45:27.921: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 23:45:27.924: INFO: Deleting all statefulset in ns statefulset-6139 Apr 8 23:45:27.927: INFO: Scaling statefulset ss to 0 Apr 8 23:45:27.934: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 23:45:27.936: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:45:27.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6139" for this suite. 
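Note on the wait loops above: the repeated "Waiting up to …", "Verifying statefulset ss doesn't scale past …", and "Waiting for pod … to enter Running" lines are all produced by the e2e framework's poll-until-timeout helpers, which re-check cluster state at a fixed interval until a condition holds or the deadline passes. A minimal sketch of that pattern, with a hypothetical `wait_for` helper (not the actual framework code, which lives in Go under test/e2e/framework):

```python
import time

def wait_for(condition, timeout=10.0, interval=1.0):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Hypothetical helper mirroring the e2e framework's wait loops, not its
    real API. Returns True if the condition was met, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a condition that only becomes true on the third poll,
# analogous to a pod that turns Ready after a couple of checks.
calls = {"n": 0}
def pod_ready():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_for(pod_ready, timeout=5.0, interval=0.01)
```

In the log, the condition being polled is e.g. "pod ss-0 is Running and Ready=false" or "statefulset status.replicas == 0"; the elapsed times printed at each line (43.9ms, 2.098s, …) correspond to the interval between successive polls.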
• [SLOW TEST:54.335 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":23,"skipped":334,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:45:27.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 8 23:45:28.012: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:45:35.254: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-331" for this suite. • [SLOW TEST:7.295 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":24,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:45:35.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 8 23:45:39.370: INFO: &Pod{ObjectMeta:{send-events-94f65803-f4ee-4b0f-92a0-d70cd6310ea1 events-7777 /api/v1/namespaces/events-7777/pods/send-events-94f65803-f4ee-4b0f-92a0-d70cd6310ea1 3b86fc12-a98c-4317-9aa8-c9c61fd20fc3 6534170 0 2020-04-08 23:45:35 +0000 UTC map[name:foo time:317979726] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w54l6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w54l6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w54l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:45:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:45:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:45:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:45:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.86,StartTime:2020-04-08 23:45:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:45:37 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://77255ac83e60caf39354996a2d018b7a9d49d31cca34dbcd3301b9e06a6cc443,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 8 23:45:41.398: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 8 23:45:43.401: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:45:43.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7777" for this suite. • [SLOW TEST:8.153 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":25,"skipped":396,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:45:43.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:45:54.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9417" for this suite. • [SLOW TEST:11.214 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":26,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:45:54.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2892" for this suite. • [SLOW TEST:16.269 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":27,"skipped":457,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:10.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 23:46:10.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8" in namespace "projected-7014" to be "Succeeded or Failed" Apr 8 23:46:11.000: INFO: Pod "downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.003147ms Apr 8 23:46:13.004: INFO: Pod "downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013580594s Apr 8 23:46:15.009: INFO: Pod "downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018248677s STEP: Saw pod success Apr 8 23:46:15.009: INFO: Pod "downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8" satisfied condition "Succeeded or Failed" Apr 8 23:46:15.012: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8 container client-container: STEP: delete the pod Apr 8 23:46:15.049: INFO: Waiting for pod downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8 to disappear Apr 8 23:46:15.065: INFO: Pod downwardapi-volume-6f3bdaa1-4c90-45fa-9548-b21f90aba0d8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:15.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7014" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":459,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:15.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 8 23:46:15.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7476' Apr 8 23:46:15.452: INFO: stderr: "" Apr 8 23:46:15.453: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 8 23:46:16.457: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:46:16.457: INFO: Found 0 / 1 Apr 8 23:46:17.456: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:46:17.456: INFO: Found 0 / 1 Apr 8 23:46:18.457: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:46:18.457: INFO: Found 1 / 1 Apr 8 23:46:18.457: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Apr 8 23:46:18.460: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:46:18.460: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 8 23:46:18.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-vzxcf --namespace=kubectl-7476 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 8 23:46:18.556: INFO: stderr: "" Apr 8 23:46:18.556: INFO: stdout: "pod/agnhost-master-vzxcf patched\n" STEP: checking annotations Apr 8 23:46:18.561: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:46:18.561: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:18.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7476" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":29,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:18.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-2d354a47-efb5-4f8a-8e84-b550728f82bc STEP: Creating a pod to test consume configMaps Apr 8 23:46:18.682: INFO: Waiting up to 5m0s for pod "pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071" in namespace "configmap-2328" to be "Succeeded or Failed" Apr 8 23:46:18.692: INFO: Pod "pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156098ms Apr 8 23:46:20.697: INFO: Pod "pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015098388s Apr 8 23:46:22.702: INFO: Pod "pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019825512s STEP: Saw pod success Apr 8 23:46:22.702: INFO: Pod "pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071" satisfied condition "Succeeded or Failed" Apr 8 23:46:22.705: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071 container configmap-volume-test: STEP: delete the pod Apr 8 23:46:22.730: INFO: Waiting for pod pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071 to disappear Apr 8 23:46:22.735: INFO: Pod pod-configmaps-9cd329e3-3e15-4b0a-abb2-b82d84874071 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:22.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2328" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":493,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:22.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e8b09eb5-a13f-4dfd-8128-18b28621be61 STEP: Creating a pod to test consume secrets Apr 8 23:46:22.974: 
INFO: Waiting up to 5m0s for pod "pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e" in namespace "secrets-174" to be "Succeeded or Failed" Apr 8 23:46:22.981: INFO: Pod "pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583133ms Apr 8 23:46:24.985: INFO: Pod "pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010582339s Apr 8 23:46:26.989: INFO: Pod "pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014930375s STEP: Saw pod success Apr 8 23:46:26.989: INFO: Pod "pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e" satisfied condition "Succeeded or Failed" Apr 8 23:46:26.992: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e container secret-volume-test: STEP: delete the pod Apr 8 23:46:27.058: INFO: Waiting for pod pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e to disappear Apr 8 23:46:27.066: INFO: Pod pod-secrets-5064b392-ca5f-451e-8cb8-6f3de078513e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:27.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-174" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":498,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:27.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-3f349987-76fe-4d97-941c-fbda2e354622 STEP: Creating a pod to test consume secrets Apr 8 23:46:27.134: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1" in namespace "projected-3613" to be "Succeeded or Failed" Apr 8 23:46:27.137: INFO: Pod "pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4003ms Apr 8 23:46:29.141: INFO: Pod "pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007155445s Apr 8 23:46:31.145: INFO: Pod "pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010942908s STEP: Saw pod success Apr 8 23:46:31.145: INFO: Pod "pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1" satisfied condition "Succeeded or Failed" Apr 8 23:46:31.148: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1 container projected-secret-volume-test: STEP: delete the pod Apr 8 23:46:31.168: INFO: Waiting for pod pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1 to disappear Apr 8 23:46:31.170: INFO: Pod pod-projected-secrets-0c25539d-f373-44d2-9ceb-0cf26884e9c1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:31.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3613" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":502,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:31.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 8 23:46:35.346: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:35.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7643" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":503,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:35.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-76a94b8e-5fe9-4abb-a233-1dc2c3315d0d STEP: Creating a pod to test consume secrets Apr 8 23:46:35.631: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021" in namespace "projected-3275" to be "Succeeded or Failed" Apr 8 23:46:35.684: INFO: Pod "pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021": Phase="Pending", Reason="", readiness=false. Elapsed: 53.241884ms Apr 8 23:46:37.689: INFO: Pod "pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057720224s Apr 8 23:46:39.693: INFO: Pod "pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061951573s STEP: Saw pod success Apr 8 23:46:39.693: INFO: Pod "pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021" satisfied condition "Succeeded or Failed" Apr 8 23:46:39.696: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021 container projected-secret-volume-test: STEP: delete the pod Apr 8 23:46:39.737: INFO: Waiting for pod pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021 to disappear Apr 8 23:46:39.741: INFO: Pod pod-projected-secrets-baaa6d27-95a4-439e-a9ff-f1ec27ea4021 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:39.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3275" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":503,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:39.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:46:56.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7054" for this suite. • [SLOW TEST:16.301 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":35,"skipped":506,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:46:56.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 23:46:56.507: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 23:46:58.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986416, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986416, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986416, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986416, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 23:47:01.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:01.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4901" for this suite. STEP: Destroying namespace "webhook-4901-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.797 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":36,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:01.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:06.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3723" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":37,"skipped":533,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:06.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9754 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9754 STEP: Deleting pre-stop pod Apr 8 23:47:19.275: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9754" for this suite. 
• [SLOW TEST:13.228 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":38,"skipped":539,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:19.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-2edad334-73ae-4edf-9563-b6b4042a5120 STEP: Creating a pod to test consume configMaps Apr 8 23:47:19.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e" in namespace "projected-8529" to be "Succeeded or Failed" Apr 8 23:47:19.401: INFO: Pod "pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80377ms Apr 8 23:47:21.404: INFO: Pod "pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007130274s Apr 8 23:47:23.408: INFO: Pod "pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011139089s STEP: Saw pod success Apr 8 23:47:23.408: INFO: Pod "pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e" satisfied condition "Succeeded or Failed" Apr 8 23:47:23.411: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e container projected-configmap-volume-test: STEP: delete the pod Apr 8 23:47:23.453: INFO: Waiting for pod pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e to disappear Apr 8 23:47:23.461: INFO: Pod pod-projected-configmaps-ced38c30-a241-48b5-a612-90b62b1d3d6e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:23.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8529" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":542,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:23.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c9e3bc96-efd6-4082-b7cb-e7cc0be1831b STEP: Creating a pod to test consume secrets Apr 8 23:47:23.571: INFO: Waiting up to 5m0s for pod "pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c" in namespace "secrets-2994" to be "Succeeded or Failed" Apr 8 23:47:23.575: INFO: Pod "pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.776032ms Apr 8 23:47:25.582: INFO: Pod "pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010750081s Apr 8 23:47:27.586: INFO: Pod "pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015223008s STEP: Saw pod success Apr 8 23:47:27.586: INFO: Pod "pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c" satisfied condition "Succeeded or Failed" Apr 8 23:47:27.589: INFO: Trying to get logs from node latest-worker pod pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c container secret-env-test: STEP: delete the pod Apr 8 23:47:27.617: INFO: Waiting for pod pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c to disappear Apr 8 23:47:27.629: INFO: Pod pod-secrets-fdbfcdbf-148e-4f0a-9b4a-d9bf2db8565c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:27.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2994" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":556,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:27.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 8 23:47:32.236: INFO: Successfully 
updated pod "labelsupdated3331098-b454-4123-9917-4a4f5c05ddc0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:34.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9870" for this suite. • [SLOW TEST:6.605 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":570,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:34.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 23:47:34.317: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 23:47:34.355: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 23:47:34.358: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 8 23:47:34.363: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.363: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:47:34.363: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.363: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 23:47:34.363: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 8 23:47:34.368: INFO: labelsupdated3331098-b454-4123-9917-4a4f5c05ddc0 from downward-api-9870 started at 2020-04-08 23:47:27 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.368: INFO: Container client-container ready: true, restart count 0 Apr 8 23:47:34.368: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.368: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:47:34.368: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.368: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 23:47:34.368: INFO: tester from prestop-9754 started at 2020-04-08 23:47:10 +0000 UTC (1 container statuses recorded) Apr 8 23:47:34.368: INFO: Container tester ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 8 23:47:34.433: INFO: Pod labelsupdated3331098-b454-4123-9917-4a4f5c05ddc0 requesting resource cpu=0m on Node latest-worker2 Apr 8 23:47:34.433: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 8 
23:47:34.433: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 8 23:47:34.433: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 8 23:47:34.433: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Apr 8 23:47:34.433: INFO: Pod tester requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 8 23:47:34.433: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 8 23:47:34.440: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f.1603fcfc5880069b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2614/filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f.1603fcfce3bf0d31], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f.1603fcfd18919b56], Reason = [Created], Message = [Created container filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f] STEP: Considering event: Type = [Normal], Name = [filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f.1603fcfd27705acb], Reason = [Started], Message = [Started container filler-pod-4bf51d24-f4a3-405a-a111-d6aa107b8b0f] STEP: Considering event: Type = [Normal], Name = [filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6.1603fcfc57c375df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2614/filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6.1603fcfc9ea15b33], Reason = [Pulled], Message = [Container image 
"k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6.1603fcfce2884697], Reason = [Created], Message = [Created container filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6] STEP: Considering event: Type = [Normal], Name = [filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6.1603fcfd03401e72], Reason = [Started], Message = [Started container filler-pod-a2d6cf4a-daa6-43d4-8af2-a3219738a9a6] STEP: Considering event: Type = [Warning], Name = [additional-pod.1603fcfdbeeb9754], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:47:41.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2614" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.322 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":42,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:47:41.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-fw5d STEP: Creating a pod to test atomic-volume-subpath Apr 8 23:47:41.691: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fw5d" in namespace "subpath-8980" to be "Succeeded or Failed" Apr 8 23:47:41.695: INFO: Pod "pod-subpath-test-secret-fw5d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.975938ms Apr 8 23:47:43.699: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007608899s Apr 8 23:47:45.703: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 4.012310187s Apr 8 23:47:47.708: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 6.016804389s Apr 8 23:47:49.712: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 8.020431227s Apr 8 23:47:51.714: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 10.023257056s Apr 8 23:47:53.718: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 12.02702635s Apr 8 23:47:55.722: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 14.030624473s Apr 8 23:47:57.726: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 16.034616107s Apr 8 23:47:59.729: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 18.038335117s Apr 8 23:48:01.741: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 20.049716704s Apr 8 23:48:03.758: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Running", Reason="", readiness=true. Elapsed: 22.067375066s Apr 8 23:48:05.762: INFO: Pod "pod-subpath-test-secret-fw5d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.071198571s STEP: Saw pod success Apr 8 23:48:05.762: INFO: Pod "pod-subpath-test-secret-fw5d" satisfied condition "Succeeded or Failed" Apr 8 23:48:05.765: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-fw5d container test-container-subpath-secret-fw5d: STEP: delete the pod Apr 8 23:48:05.807: INFO: Waiting for pod pod-subpath-test-secret-fw5d to disappear Apr 8 23:48:05.817: INFO: Pod pod-subpath-test-secret-fw5d no longer exists STEP: Deleting pod pod-subpath-test-secret-fw5d Apr 8 23:48:05.817: INFO: Deleting pod "pod-subpath-test-secret-fw5d" in namespace "subpath-8980" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:48:05.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8980" for this suite. • [SLOW TEST:24.239 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":43,"skipped":625,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:48:05.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:48:05.945: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:48:06.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8884" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":44,"skipped":634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:48:06.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 23:48:07.061: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 5.415304ms)
Apr 8 23:48:07.064: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.238947ms)
Apr 8 23:48:07.067: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.211781ms)
Apr 8 23:48:07.070: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.06109ms)
Apr 8 23:48:07.074: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.115527ms)
Apr 8 23:48:07.077: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.061718ms)
Apr 8 23:48:07.079: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.686001ms)
Apr 8 23:48:07.100: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 20.799194ms)
Apr 8 23:48:07.103: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.870471ms)
Apr 8 23:48:07.107: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.467387ms)
Apr 8 23:48:07.110: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.233114ms)
Apr 8 23:48:07.113: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.179203ms)
Apr 8 23:48:07.116: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.916798ms)
Apr 8 23:48:07.120: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.450081ms)
Apr 8 23:48:07.122: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.841247ms)
Apr 8 23:48:07.125: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.94102ms)
Apr 8 23:48:07.128: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.776524ms)
Apr 8 23:48:07.131: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.707955ms)
Apr 8 23:48:07.134: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.611377ms)
Apr 8 23:48:07.137: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.797368ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:48:07.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2297" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":45,"skipped":675,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:48:07.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 8 23:48:07.222: INFO: Waiting up to 5m0s for pod "client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11" in namespace "containers-8842" to be "Succeeded or Failed" Apr 8 23:48:07.224: INFO: Pod "client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260825ms Apr 8 23:48:09.228: INFO: Pod "client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006153668s Apr 8 23:48:11.231: INFO: Pod "client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009521556s STEP: Saw pod success Apr 8 23:48:11.231: INFO: Pod "client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11" satisfied condition "Succeeded or Failed" Apr 8 23:48:11.234: INFO: Trying to get logs from node latest-worker pod client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11 container test-container: STEP: delete the pod Apr 8 23:48:11.286: INFO: Waiting for pod client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11 to disappear Apr 8 23:48:11.297: INFO: Pod client-containers-8de1f500-c1c9-4745-8fce-54173ae8fd11 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:48:11.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8842" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:48:11.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:48:18.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1254" for this suite. • [SLOW TEST:7.062 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":47,"skipped":730,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:48:18.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 23:48:24.487: INFO: DNS probes using dns-test-c4cf401f-37ba-41f4-b854-df9298c1abaf succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 23:48:30.602: INFO: File wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:30.609: INFO: File jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:30.609: INFO: Lookups using dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 failed for: [wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local] Apr 8 23:48:35.615: INFO: File wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:35.618: INFO: File jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:35.619: INFO: Lookups using dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 failed for: [wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local] Apr 8 23:48:40.614: INFO: File wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:40.618: INFO: File jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 8 23:48:40.618: INFO: Lookups using dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 failed for: [wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local] Apr 8 23:48:45.615: INFO: File wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:45.619: INFO: File jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:45.619: INFO: Lookups using dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 failed for: [wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local] Apr 8 23:48:50.614: INFO: File wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 23:48:50.618: INFO: File jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local from pod dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 8 23:48:50.618: INFO: Lookups using dns-6513/dns-test-6a60cb80-9e19-477e-8708-d87b62185623 failed for: [wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local] Apr 8 23:48:55.618: INFO: DNS probes using dns-test-6a60cb80-9e19-477e-8708-d87b62185623 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6513.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6513.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 23:49:02.236: INFO: DNS probes using dns-test-b767bbd4-8c69-45d7-84be-0456d888221d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:02.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6513" for this suite. 
• [SLOW TEST:43.931 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":48,"skipped":749,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:02.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 23:49:03.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 23:49:05.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986543, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986543, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986543, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986543, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 23:49:08.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:49:08.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7401-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8230" for this suite. STEP: Destroying namespace "webhook-8230-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.263 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":49,"skipped":767,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:09.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 8 23:49:09.629: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 23:49:09.647: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 23:49:09.650: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 8 23:49:09.657: INFO: sample-webhook-deployment-6cc9cc9dc-2fvnp from webhook-8230 started at 2020-04-08 23:49:03 +0000 UTC (1 container statuses recorded) Apr 8 23:49:09.657: INFO: Container sample-webhook ready: true, restart count 0 Apr 8 23:49:09.657: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:49:09.657: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 23:49:09.657: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:49:09.657: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:49:09.657: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 8 23:49:09.662: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:49:09.662: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 23:49:09.662: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 8 23:49:09.662: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1603fd1280e71889], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1603fd1281754b49], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:10.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5014" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":50,"skipped":778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:10.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 8 23:49:10.743: INFO: Waiting up to 5m0s for pod "downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad" in namespace "downward-api-5022" to be "Succeeded or Failed" Apr 8 23:49:10.759: INFO: Pod "downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125692ms Apr 8 23:49:12.767: INFO: Pod "downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023624896s Apr 8 23:49:14.771: INFO: Pod "downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027324278s STEP: Saw pod success Apr 8 23:49:14.771: INFO: Pod "downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad" satisfied condition "Succeeded or Failed" Apr 8 23:49:14.773: INFO: Trying to get logs from node latest-worker2 pod downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad container dapi-container: STEP: delete the pod Apr 8 23:49:14.802: INFO: Waiting for pod downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad to disappear Apr 8 23:49:14.814: INFO: Pod downward-api-0821c8b1-4988-4c93-b3a2-7014aedf9dad no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:14.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5022" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":807,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:14.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a 
non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 8 23:49:17.961: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:18.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3640" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":813,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:18.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 23:49:18.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931" in namespace "downward-api-1240" to be "Succeeded or Failed" Apr 8 23:49:18.237: INFO: Pod "downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931": Phase="Pending", Reason="", readiness=false. Elapsed: 17.282383ms Apr 8 23:49:20.241: INFO: Pod "downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021540842s Apr 8 23:49:22.245: INFO: Pod "downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02573183s STEP: Saw pod success Apr 8 23:49:22.245: INFO: Pod "downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931" satisfied condition "Succeeded or Failed" Apr 8 23:49:22.248: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931 container client-container: STEP: delete the pod Apr 8 23:49:22.268: INFO: Waiting for pod downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931 to disappear Apr 8 23:49:22.272: INFO: Pod downwardapi-volume-f9316310-fc2f-40bd-bada-de390ff8c931 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:49:22.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1240" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":828,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:49:22.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-477cda44-1127-4210-b937-99a6e29a7175 in namespace container-probe-1128 Apr 8 23:49:26.364: INFO: Started pod busybox-477cda44-1127-4210-b937-99a6e29a7175 in namespace container-probe-1128 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 23:49:26.367: INFO: Initial restart count of pod busybox-477cda44-1127-4210-b937-99a6e29a7175 is 0 Apr 8 23:50:12.471: INFO: Restart count of pod container-probe-1128/busybox-477cda44-1127-4210-b937-99a6e29a7175 is now 1 (46.104030468s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:50:12.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-1128" for this suite. • [SLOW TEST:50.234 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:50:12.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 8 23:50:12.611: INFO: Waiting up to 5m0s for pod "pod-da91d440-da4a-42a1-bbed-ecf6373243ab" in namespace "emptydir-6338" to be "Succeeded or Failed" Apr 8 23:50:12.614: INFO: Pod "pod-da91d440-da4a-42a1-bbed-ecf6373243ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.102689ms Apr 8 23:50:14.618: INFO: Pod "pod-da91d440-da4a-42a1-bbed-ecf6373243ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007259344s Apr 8 23:50:16.622: INFO: Pod "pod-da91d440-da4a-42a1-bbed-ecf6373243ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011250303s STEP: Saw pod success Apr 8 23:50:16.622: INFO: Pod "pod-da91d440-da4a-42a1-bbed-ecf6373243ab" satisfied condition "Succeeded or Failed" Apr 8 23:50:16.624: INFO: Trying to get logs from node latest-worker2 pod pod-da91d440-da4a-42a1-bbed-ecf6373243ab container test-container: STEP: delete the pod Apr 8 23:50:16.658: INFO: Waiting for pod pod-da91d440-da4a-42a1-bbed-ecf6373243ab to disappear Apr 8 23:50:16.668: INFO: Pod pod-da91d440-da4a-42a1-bbed-ecf6373243ab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:50:16.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6338" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":874,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:50:16.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:50:16.769: INFO: Creating ReplicaSet 
my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec Apr 8 23:50:16.777: INFO: Pod name my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec: Found 0 pods out of 1 Apr 8 23:50:21.781: INFO: Pod name my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec: Found 1 pods out of 1 Apr 8 23:50:21.781: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec" is running Apr 8 23:50:21.783: INFO: Pod "my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec-67289" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:50:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:50:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:50:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 23:50:16 +0000 UTC Reason: Message:}]) Apr 8 23:50:21.783: INFO: Trying to dial the pod Apr 8 23:50:26.795: INFO: Controller my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec: Got expected result from replica 1 [my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec-67289]: "my-hostname-basic-cbe0144f-bbc6-425b-a47a-e4d27be794ec-67289", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:50:26.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7343" for this suite. 
• [SLOW TEST:10.127 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":56,"skipped":876,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:50:26.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:50:26.882: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:50:33.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-246" for this suite. 
• [SLOW TEST:6.338 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":57,"skipped":891,"failed":0} [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:50:33.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:50:33.209: INFO: Creating deployment "webserver-deployment" Apr 8 23:50:33.214: INFO: Waiting for observed generation 1 Apr 8 23:50:35.324: INFO: Waiting for all required pods to come up Apr 8 23:50:35.328: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 8 23:50:43.337: INFO: Waiting for deployment "webserver-deployment" 
to complete Apr 8 23:50:43.344: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 8 23:50:43.350: INFO: Updating deployment webserver-deployment Apr 8 23:50:43.350: INFO: Waiting for observed generation 2 Apr 8 23:50:45.359: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 8 23:50:45.361: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 8 23:50:45.363: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 8 23:50:45.372: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 8 23:50:45.372: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 8 23:50:45.375: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 8 23:50:45.379: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 8 23:50:45.379: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 8 23:50:45.407: INFO: Updating deployment webserver-deployment Apr 8 23:50:45.407: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 8 23:50:45.464: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 8 23:50:45.482: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 8 23:50:45.734: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5476 /apis/apps/v1/namespaces/deployment-5476/deployments/webserver-deployment 2bd40382-d363-4418-a4a9-81d894a0abf8 6536381 3 2020-04-08 23:50:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
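The proportional-scaling arithmetic verified above (first rollout's ReplicaSet scaled to .spec.replicas = 20 and the second rollout's to 13 when the Deployment is scaled from 10 to 30 with maxSurge 3) can be reproduced as a small sketch. The newest-first processing order and round-half rounding below are assumptions inferred from the replica counts observed in this log, not taken from the deployment controller source:

```python
# Sketch of proportional scaling during a Deployment scale-up.
# ASSUMPTION: ReplicaSets are processed newest-first and shares are rounded
# to the nearest integer; both are inferred from the 20/13 split in this log.

def proportional_scale(desired, max_surge, replica_sets):
    """replica_sets: list of (name, current_replicas, annotated_max) tuples,
    newest ReplicaSet first. annotated_max is the value of the
    deployment.kubernetes.io/max-replicas annotation recorded on the
    ReplicaSet before this scale event."""
    allowed = desired + max_surge                           # 30 + 3 = 33
    to_add = allowed - sum(r for _, r, _ in replica_sets)   # 33 - 15 = 18
    result = {}
    for name, replicas, annotated_max in replica_sets:
        # Each ReplicaSet's target grows in proportion to its current size
        # relative to its previously annotated maximum.
        target = round(replicas * allowed / annotated_max)
        share = min(target - replicas, to_add)  # never exceed the remainder
        result[name] = replicas + share
        to_add -= share
    return result

# Values from this log: the new rollout's ReplicaSet had 5 replicas, the old
# one had 10, and both carried max-replicas: 13 (10 desired + maxSurge 3)
# before the scale to 30.
sizes = proportional_scale(30, 3, [
    ("webserver-deployment-c7997dcc8", 5, 13),    # second (new) rollout
    ("webserver-deployment-595b5b9587", 10, 13),  # first (old) rollout
])
```

With these inputs the sketch yields 13 for the new ReplicaSet and 20 for the old one, totaling 33 (the 30 desired replicas plus the maxSurge allowance), matching the `.spec.replicas` values the test verifies above.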
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e95868 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-08 23:50:43 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-08 23:50:45 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 8 23:50:45.783: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5476 
/apis/apps/v1/namespaces/deployment-5476/replicasets/webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 6536419 3 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2bd40382-d363-4418-a4a9-81d894a0abf8 0xc0035daa27 0xc0035daa28}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035daa98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:50:45.783: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 8 23:50:45.783: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5476 /apis/apps/v1/namespaces/deployment-5476/replicasets/webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 6536423 3 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2bd40382-d363-4418-a4a9-81d894a0abf8 0xc0035da967 0xc0035da968}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035da9c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 8 23:50:45.871: INFO: Pod "webserver-deployment-595b5b9587-48px8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-48px8 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-48px8 26a51577-c464-4cff-bf36-278e8546eb5c 6536400 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec257 0xc0035ec258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-5dvdg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5dvdg webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-5dvdg a0f21ddc-5ed8-4f18-8912-8f4c87f19f0f 6536434 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec377 0xc0035ec378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-08 23:50:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-67tp8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-67tp8 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-67tp8 b1798e8e-b372-49c4-a5e3-a94e8d9b3603 6536418 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec4d7 0xc0035ec4d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-7l96c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7l96c webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-7l96c 8182c72a-be76-4c0a-9ec4-42f81227203f 6536437 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec5f7 0xc0035ec5f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-08 23:50:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-9nmw7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9nmw7 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-9nmw7 a5ecaac5-ae5d-477d-94b4-ebf5ce54e53f 6536292 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec757 0xc0035ec758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.109,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e60df9722a63f0324c26e496a7643c09a7a9a886a65b26d00c7eab1986a8d709,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-bmkz6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bmkz6 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-bmkz6 0a2c0ce4-963a-473e-b879-31f18303db92 6536298 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ec8d7 0xc0035ec8d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.69,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b4c1c29728c993df40b90f6901a2e8cb8fcff2d99e404848687ea608bb2b8f73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.872: INFO: Pod "webserver-deployment-595b5b9587-fxc4v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fxc4v webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-fxc4v 71460c60-c0f0-4290-8d31-9398b8ece669 6536401 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035eca57 0xc0035eca58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.873: INFO: Pod "webserver-deployment-595b5b9587-g6d9q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6d9q webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-g6d9q e75e076f-eb28-47cf-a1f1-57b026bf7350 6536421 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ecb77 0xc0035ecb78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-08 23:50:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.873: INFO: Pod "webserver-deployment-595b5b9587-hjvzn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hjvzn webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-hjvzn 4ab14656-bc55-4f71-aecb-ca3a21bcb308 6536276 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035eccd7 0xc0035eccd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.68,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9c2f118183c6f7ba5853d11084a000a6a7071e403fda7fbc3a7cc2c0ddf7244b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.873: INFO: Pod "webserver-deployment-595b5b9587-jmml9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jmml9 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-jmml9 b2b31ebb-e5a1-45e6-94e9-444597d7905d 6536262 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ece57 0xc0035ece58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.66,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5a2fcbdb6dfd42bffd8a8c5e6d4b81ce33851c0daab15a5fc9d5a691123eab4d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.873: INFO: Pod "webserver-deployment-595b5b9587-kbghh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kbghh webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-kbghh fc5cbbe8-6ad8-4cf0-9e8a-2509eaf59a4e 6536253 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ecfd7 0xc0035ecfd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.107,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1a18f6f73005e2816f34cc87ce327c73172dad6f4fda662c47a66ca47ded1456,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.873: INFO: Pod "webserver-deployment-595b5b9587-l8vqn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l8vqn webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-l8vqn 74dda2fe-7265-4ce5-9548-2528fa079d59 6536286 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed157 0xc0035ed158}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.110,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7653169a1d03ee6470b1aa35a656be7a00ed163a305ec4475272ccc7ba3f13a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.110,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-lknhq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lknhq webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-lknhq 416ec0fc-25b8-43fe-a835-1cdeabd20c62 6536414 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed2d7 0xc0035ed2d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-rn9v8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rn9v8 webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-rn9v8 90e6ae8d-9d2a-4a0c-b64d-009cabe74590 6536417 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed3f7 0xc0035ed3f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-rx6nm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rx6nm webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-rx6nm 5ed72f0c-b040-47d2-adc1-a715675e8999 6536415 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed517 0xc0035ed518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-sh4xs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sh4xs webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-sh4xs ccbaaa98-18cc-48c2-9c91-5f5b47b06bd1 6536394 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed637 0xc0035ed638}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-tlc6d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tlc6d webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-tlc6d 76f434a5-913a-409d-b455-f189375153a5 6536410 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed757 0xc0035ed758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.874: INFO: Pod "webserver-deployment-595b5b9587-ttm2c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ttm2c webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-ttm2c 9b9b36e2-92c5-49be-b01a-5b07b97da4d2 6536236 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed877 0xc0035ed878}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.65,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a4143368704be147bb54c9c558d0c266ede386e4c76162c523a24e93de19a9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-595b5b9587-xn82b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xn82b webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-xn82b 2d93721e-f64c-4a22-90ab-5e6d94ed59e4 6536393 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035ed9f7 0xc0035ed9f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-595b5b9587-ztfzq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ztfzq webserver-deployment-595b5b9587- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-595b5b9587-ztfzq 2e295381-831a-4afd-b4ea-00d35b452429 6536232 0 2020-04-08 23:50:33 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 35ad7da3-0521-4bc4-b269-b9ff087b8c85 0xc0035edb17 0xc0035edb18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.106,StartTime:2020-04-08 23:50:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 23:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6320a661561a5d89c4bf57013f616fca563943f13ea29345c51cba570466b915,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-c7997dcc8-299vd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-299vd webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-299vd ee1814bb-8a6f-485e-95af-13a3fe227246 6536412 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0035edc97 0xc0035edc98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-c7997dcc8-2mdp9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2mdp9 webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-2mdp9 637b5473-695c-46e0-8bf9-1fb0302a476e 6536382 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0035eddc7 0xc0035eddc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-c7997dcc8-5chcx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5chcx webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-5chcx 33539059-8860-456a-b0d1-b7228e4c57e5 6536328 0 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0035edef7 0xc0035edef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-08 23:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-c7997dcc8-5fx8d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5fx8d webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-5fx8d 571a529e-26a7-48c7-a60d-7898be06ac0b 6536420 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae077 0xc0036ae078}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.875: INFO: Pod "webserver-deployment-c7997dcc8-9swkk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9swkk webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-9swkk 3a95ac32-76c1-46d1-80c9-70dc8ee940f6 6536348 0 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae1a7 0xc0036ae1a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-08 23:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-g279k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g279k webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-g279k f4600d98-534d-46ee-9d4d-472f5f1868ee 6536339 0 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae327 0xc0036ae328}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-08 23:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-gtftt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gtftt webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-gtftt 71e2703d-706b-4277-9b8f-30bd7eb3868a 6536384 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae4a7 0xc0036ae4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-gttlm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gttlm webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-gttlm 602daea0-f473-4e99-9cc5-7728841526ce 6536413 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae5d7 0xc0036ae5d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-jjlrh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jjlrh webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-jjlrh a27c790e-c8e2-4479-88da-659cd8484cc6 6536416 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae707 0xc0036ae708}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-t7vwf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t7vwf webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-t7vwf 508b5218-e67e-47c3-8eb5-6bab76ec7745 6536323 0 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae837 0xc0036ae838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-08 23:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.876: INFO: Pod "webserver-deployment-c7997dcc8-wcfkp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wcfkp webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-wcfkp 0a5dfcb3-a03e-45f2-89ec-09b7a10e695c 6536350 0 2020-04-08 23:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036ae9b7 0xc0036ae9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-08 23:50:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.877: INFO: Pod "webserver-deployment-c7997dcc8-xnzz6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xnzz6 webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-xnzz6 177a2071-df27-4bba-b259-3fa4555382c0 6536411 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036aeb37 0xc0036aeb38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 23:50:45.877: INFO: Pod "webserver-deployment-c7997dcc8-zkq54" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zkq54 webserver-deployment-c7997dcc8- deployment-5476 /api/v1/namespaces/deployment-5476/pods/webserver-deployment-c7997dcc8-zkq54 58c4144b-4fcd-470e-aa64-55585eff98b8 6536395 0 2020-04-08 23:50:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 04df337c-2604-4754-a338-15bac4ed28f3 0xc0036aec67 0xc0036aec68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jf78v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jf78v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jf78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 23:50:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:50:45.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5476" for this suite. 
• [SLOW TEST:12.958 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":58,"skipped":891,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:50:46.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 8 23:50:46.360: INFO: namespace kubectl-463 Apr 8 23:50:46.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-463' Apr 8 23:50:46.678: INFO: stderr: "" Apr 8 23:50:46.678: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 8 23:50:48.026: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:48.026: INFO: Found 0 / 1 Apr 8 23:50:48.859: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:48.859: INFO: Found 0 / 1 Apr 8 23:50:49.683: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:49.683: INFO: Found 0 / 1 Apr 8 23:50:51.079: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:51.079: INFO: Found 0 / 1 Apr 8 23:50:51.707: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:51.707: INFO: Found 0 / 1 Apr 8 23:50:53.025: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:53.025: INFO: Found 0 / 1 Apr 8 23:50:53.737: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:53.737: INFO: Found 0 / 1 Apr 8 23:50:54.726: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:54.726: INFO: Found 0 / 1 Apr 8 23:50:55.918: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:55.918: INFO: Found 0 / 1 Apr 8 23:50:56.844: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:56.844: INFO: Found 0 / 1 Apr 8 23:50:57.772: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:57.772: INFO: Found 0 / 1 Apr 8 23:50:58.936: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:50:58.936: INFO: Found 0 / 1 Apr 8 23:51:00.130: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:51:00.130: INFO: Found 0 / 1 Apr 8 23:51:00.719: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:51:00.719: INFO: Found 0 / 1 Apr 8 23:51:01.720: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:51:01.720: INFO: Found 1 / 1 Apr 8 23:51:01.720: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 8 23:51:01.743: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 23:51:01.743: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 8 23:51:01.743: INFO: wait on agnhost-master startup in kubectl-463 Apr 8 23:51:01.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-6h7nq agnhost-master --namespace=kubectl-463' Apr 8 23:51:02.089: INFO: stderr: "" Apr 8 23:51:02.089: INFO: stdout: "Paused\n" STEP: exposing RC Apr 8 23:51:02.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-463' Apr 8 23:51:02.395: INFO: stderr: "" Apr 8 23:51:02.395: INFO: stdout: "service/rm2 exposed\n" Apr 8 23:51:02.432: INFO: Service rm2 in namespace kubectl-463 found. STEP: exposing service Apr 8 23:51:04.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-463' Apr 8 23:51:04.623: INFO: stderr: "" Apr 8 23:51:04.623: INFO: stdout: "service/rm3 exposed\n" Apr 8 23:51:04.629: INFO: Service rm3 in namespace kubectl-463 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:06.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-463" for this suite. 
• [SLOW TEST:20.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":59,"skipped":897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:51:06.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 8 23:51:06.714: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:13.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-975" for this suite. 
• [SLOW TEST:7.151 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":60,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:51:13.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a17ba31a-aec3-4aba-b39f-a5b293bc286d STEP: Creating a pod to test consume configMaps Apr 8 23:51:13.868: INFO: Waiting up to 5m0s for pod "pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83" in namespace "configmap-7517" to be "Succeeded or Failed" Apr 8 23:51:13.884: INFO: Pod "pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.694214ms Apr 8 23:51:15.888: INFO: Pod "pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020089405s Apr 8 23:51:17.893: INFO: Pod "pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024211621s STEP: Saw pod success Apr 8 23:51:17.893: INFO: Pod "pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83" satisfied condition "Succeeded or Failed" Apr 8 23:51:17.896: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83 container configmap-volume-test: STEP: delete the pod Apr 8 23:51:17.915: INFO: Waiting for pod pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83 to disappear Apr 8 23:51:17.934: INFO: Pod pod-configmaps-375f14f3-fcf7-4e68-879c-507b34243c83 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:17.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7517" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1011,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:51:17.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-a1058c27-7614-4eb2-bb53-9159870b6916 STEP: Creating a pod to test consume secrets Apr 8 23:51:18.025: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197" in namespace "projected-5592" to be "Succeeded or Failed" Apr 8 23:51:18.030: INFO: Pod "pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197": Phase="Pending", Reason="", readiness=false. Elapsed: 4.646832ms Apr 8 23:51:20.034: INFO: Pod "pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008414163s Apr 8 23:51:22.038: INFO: Pod "pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012320278s STEP: Saw pod success Apr 8 23:51:22.038: INFO: Pod "pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197" satisfied condition "Succeeded or Failed" Apr 8 23:51:22.040: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197 container projected-secret-volume-test: STEP: delete the pod Apr 8 23:51:22.082: INFO: Waiting for pod pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197 to disappear Apr 8 23:51:22.133: INFO: Pod pod-projected-secrets-ff68e1cb-2bbf-4217-a045-bb11e00b0197 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:22.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5592" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1019,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:51:22.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 8 23:51:22.190: INFO: Waiting up to 5m0s 
for pod "var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db" in namespace "var-expansion-5805" to be "Succeeded or Failed" Apr 8 23:51:22.192: INFO: Pod "var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.442835ms Apr 8 23:51:24.195: INFO: Pod "var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005526703s Apr 8 23:51:26.200: INFO: Pod "var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010604424s STEP: Saw pod success Apr 8 23:51:26.200: INFO: Pod "var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db" satisfied condition "Succeeded or Failed" Apr 8 23:51:26.204: INFO: Trying to get logs from node latest-worker2 pod var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db container dapi-container: STEP: delete the pod Apr 8 23:51:26.223: INFO: Waiting for pod var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db to disappear Apr 8 23:51:26.228: INFO: Pod var-expansion-b8e183a5-bc13-4b2a-8944-d17df15d21db no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:26.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5805" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1030,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:51:26.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:51:30.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7938" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1042,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:51:30.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 8 23:51:30.479: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 8 23:51:30.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:30.751: INFO: stderr: ""
Apr 8 23:51:30.751: INFO: stdout: "service/agnhost-slave created\n"
Apr 8 23:51:30.751: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 8 23:51:30.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:30.998: INFO: stderr: ""
Apr 8 23:51:30.998: INFO: stdout: "service/agnhost-master created\n"
Apr 8 23:51:30.998: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 8 23:51:30.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:31.312: INFO: stderr: ""
Apr 8 23:51:31.312: INFO: stdout: "service/frontend created\n"
Apr 8 23:51:31.312: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 8 23:51:31.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:31.570: INFO: stderr: ""
Apr 8 23:51:31.570: INFO: stdout: "deployment.apps/frontend created\n"
Apr 8 23:51:31.571: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 8 23:51:31.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:31.860: INFO: stderr: ""
Apr 8 23:51:31.860: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 8 23:51:31.860: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 8 23:51:31.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3048'
Apr 8 23:51:32.121: INFO: stderr: ""
Apr 8 23:51:32.121: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 8 23:51:32.121: INFO: Waiting for all frontend pods to be Running.
Apr 8 23:51:37.172: INFO: Waiting for frontend to serve content.
Apr 8 23:51:38.219: INFO: Trying to add a new entry to the guestbook.
Apr 8 23:51:38.235: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 8 23:51:38.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.364: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.364: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 8 23:51:38.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.508: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.508: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 8 23:51:38.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.625: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.625: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 8 23:51:38.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.731: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.731: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 8 23:51:38.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.841: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.841: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 8 23:51:38.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3048'
Apr 8 23:51:38.961: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 23:51:38.961: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:51:38.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3048" for this suite.
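The guestbook flow exercised in this test (create services and deployments from stdin manifests, validate the app, then force-delete everything) can be reproduced by hand. The sketch below is a simplified reconstruction, not the test's own code: it assumes a reachable cluster, `kubectl` on PATH, and local copies of the six manifests echoed in the log; the namespace name and manifest filenames are illustrative.

```shell
#!/bin/sh
# Hedged sketch of the guestbook create/validate/teardown sequence.
set -eu

NS=guestbook-demo                      # illustrative namespace
kubectl create namespace "$NS"

# Create each component from a manifest file, mirroring 'create -f -' in the log.
for f in agnhost-slave-svc.yaml agnhost-master-svc.yaml frontend-svc.yaml \
         frontend-deploy.yaml agnhost-master-deploy.yaml agnhost-slave-deploy.yaml; do
  kubectl create -f "$f" --namespace="$NS"
done

# Validate: wait until the frontend Deployment reports Available,
# roughly what "Waiting for all frontend pods to be Running" does.
kubectl wait --for=condition=available deployment/frontend \
  --namespace="$NS" --timeout=120s

# Tear down the same way the test does: forced, zero grace period.
kubectl delete namespace "$NS" --grace-period=0 --force
```

As the warnings in the log note, `--grace-period=0 --force` returns before the resources are actually gone, which is why the test logs each "force deleted" line without waiting.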
• [SLOW TEST:8.572 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":65,"skipped":1043,"failed":0}
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:51:38.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 8 23:51:43.671: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-902 pod-service-account-ba74fcf7-9004-49d7-b8b2-65643830bcb8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 8 23:51:43.894: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-902 pod-service-account-ba74fcf7-9004-49d7-b8b2-65643830bcb8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 8 23:51:44.086: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-902 pod-service-account-ba74fcf7-9004-49d7-b8b2-65643830bcb8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:51:44.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-902" for this suite.
• [SLOW TEST:5.417 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":66,"skipped":1043,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:51:44.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 23:51:45.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 23:51:47.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986705, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986705, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986705, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986705, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 23:51:50.312: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 8 23:51:50.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5739-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:51:51.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1909" for this suite.
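The registration step above ("Registering the mutating webhook ... via the AdmissionRegistration API") amounts to creating a MutatingWebhookConfiguration that routes CREATE events for the custom resource to the in-cluster webhook service. The sketch below shows the general shape of such a registration; every name, the namespace, the path, and the caBundle placeholder are illustrative, not the test's actual values.

```shell
# Hedged sketch: register a mutating admission webhook for a custom resource.
# All identifiers here are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-crd-mutator
webhooks:
- name: mutate-crd.webhook.example.com
  admissionReviewVersions: ["v1"]      # required in admissionregistration v1
  sideEffects: None                    # required in admissionregistration v1
  clientConfig:
    service:
      name: e2e-test-webhook           # Service fronting the webhook pod
      namespace: webhook-demo
      path: /mutating-custom-resource
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["demo-crds"]
EOF
```

The endpoint check in the log ("Waiting for amount of service:e2e-test-webhook endpoints to be 1") matters because the API server calls the webhook synchronously; registering it before the backing service has endpoints would fail admission of the custom resource.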
STEP: Destroying namespace "webhook-1909-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.066 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":67,"skipped":1055,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:51:51.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:52:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1005" for this suite.
• [SLOW TEST:17.162 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":68,"skipped":1101,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:52:08.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-cba26b55-787d-4ea4-a5aa-f6b36c10b194
STEP: Creating a pod to test consume configMaps
Apr 8 23:52:08.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854" in namespace "projected-7908" to be "Succeeded or Failed"
Apr 8 23:52:08.780: INFO: Pod "pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854": Phase="Pending", Reason="", readiness=false. Elapsed: 48.278476ms
Apr 8 23:52:10.784: INFO: Pod "pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052102346s
Apr 8 23:52:12.789: INFO: Pod "pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056921816s
STEP: Saw pod success
Apr 8 23:52:12.789: INFO: Pod "pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854" satisfied condition "Succeeded or Failed"
Apr 8 23:52:12.792: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854 container projected-configmap-volume-test:
STEP: delete the pod
Apr 8 23:52:12.810: INFO: Waiting for pod pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854 to disappear
Apr 8 23:52:12.814: INFO: Pod pod-projected-configmaps-2ca3bb4d-8848-4cbe-bb0b-92b2de694854 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:52:12.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7908" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1101,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:52:12.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1568 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 8 23:52:12.905: INFO: Found 0 stateful pods, waiting for 3 Apr 8 23:52:22.997: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:52:22.997: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:52:22.997: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 8 23:52:32.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:52:32.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:52:32.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 8 23:52:32.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1568 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:52:33.182: INFO: stderr: "I0408 23:52:33.055591 646 log.go:172] (0xc000c2e000) (0xc00058a000) Create stream\nI0408 23:52:33.055645 646 log.go:172] (0xc000c2e000) (0xc00058a000) Stream added, broadcasting: 1\nI0408 23:52:33.059010 646 log.go:172] (0xc000c2e000) Reply frame received for 1\nI0408 23:52:33.059093 646 log.go:172] (0xc000c2e000) (0xc00047a000) Create stream\nI0408 23:52:33.059127 646 log.go:172] (0xc000c2e000) (0xc00047a000) Stream added, broadcasting: 3\nI0408 23:52:33.060183 646 log.go:172] (0xc000c2e000) Reply frame received for 3\nI0408 23:52:33.060227 646 log.go:172] (0xc000c2e000) (0xc00058a140) Create stream\nI0408 
23:52:33.060241 646 log.go:172] (0xc000c2e000) (0xc00058a140) Stream added, broadcasting: 5\nI0408 23:52:33.061279 646 log.go:172] (0xc000c2e000) Reply frame received for 5\nI0408 23:52:33.144949 646 log.go:172] (0xc000c2e000) Data frame received for 5\nI0408 23:52:33.144996 646 log.go:172] (0xc00058a140) (5) Data frame handling\nI0408 23:52:33.145029 646 log.go:172] (0xc00058a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:52:33.175114 646 log.go:172] (0xc000c2e000) Data frame received for 3\nI0408 23:52:33.175137 646 log.go:172] (0xc00047a000) (3) Data frame handling\nI0408 23:52:33.175145 646 log.go:172] (0xc00047a000) (3) Data frame sent\nI0408 23:52:33.175150 646 log.go:172] (0xc000c2e000) Data frame received for 3\nI0408 23:52:33.175155 646 log.go:172] (0xc00047a000) (3) Data frame handling\nI0408 23:52:33.175407 646 log.go:172] (0xc000c2e000) Data frame received for 5\nI0408 23:52:33.175417 646 log.go:172] (0xc00058a140) (5) Data frame handling\nI0408 23:52:33.177433 646 log.go:172] (0xc000c2e000) Data frame received for 1\nI0408 23:52:33.177465 646 log.go:172] (0xc00058a000) (1) Data frame handling\nI0408 23:52:33.177486 646 log.go:172] (0xc00058a000) (1) Data frame sent\nI0408 23:52:33.177505 646 log.go:172] (0xc000c2e000) (0xc00058a000) Stream removed, broadcasting: 1\nI0408 23:52:33.177528 646 log.go:172] (0xc000c2e000) Go away received\nI0408 23:52:33.177902 646 log.go:172] (0xc000c2e000) (0xc00058a000) Stream removed, broadcasting: 1\nI0408 23:52:33.177915 646 log.go:172] (0xc000c2e000) (0xc00047a000) Stream removed, broadcasting: 3\nI0408 23:52:33.177921 646 log.go:172] (0xc000c2e000) (0xc00058a140) Stream removed, broadcasting: 5\n" Apr 8 23:52:33.182: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:52:33.182: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating 
StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 8 23:52:43.220: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 8 23:52:53.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1568 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 23:52:53.466: INFO: stderr: "I0408 23:52:53.395036 668 log.go:172] (0xc0009f8790) (0xc0008d21e0) Create stream\nI0408 23:52:53.395104 668 log.go:172] (0xc0009f8790) (0xc0008d21e0) Stream added, broadcasting: 1\nI0408 23:52:53.398536 668 log.go:172] (0xc0009f8790) Reply frame received for 1\nI0408 23:52:53.398574 668 log.go:172] (0xc0009f8790) (0xc000920000) Create stream\nI0408 23:52:53.398584 668 log.go:172] (0xc0009f8790) (0xc000920000) Stream added, broadcasting: 3\nI0408 23:52:53.399737 668 log.go:172] (0xc0009f8790) Reply frame received for 3\nI0408 23:52:53.399779 668 log.go:172] (0xc0009f8790) (0xc0006e9400) Create stream\nI0408 23:52:53.399794 668 log.go:172] (0xc0009f8790) (0xc0006e9400) Stream added, broadcasting: 5\nI0408 23:52:53.400816 668 log.go:172] (0xc0009f8790) Reply frame received for 5\nI0408 23:52:53.461060 668 log.go:172] (0xc0009f8790) Data frame received for 3\nI0408 23:52:53.461126 668 log.go:172] (0xc000920000) (3) Data frame handling\nI0408 23:52:53.461288 668 log.go:172] (0xc000920000) (3) Data frame sent\nI0408 23:52:53.461336 668 log.go:172] (0xc0009f8790) Data frame received for 3\nI0408 23:52:53.461362 668 log.go:172] (0xc000920000) (3) Data frame handling\nI0408 23:52:53.461611 668 log.go:172] (0xc0009f8790) Data frame received for 5\nI0408 23:52:53.461641 668 log.go:172] (0xc0006e9400) (5) Data frame handling\nI0408 23:52:53.461704 668 log.go:172] (0xc0006e9400) (5) Data frame sent\nI0408 23:52:53.461728 668 log.go:172] (0xc0009f8790) Data frame 
received for 5\nI0408 23:52:53.461759 668 log.go:172] (0xc0006e9400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 23:52:53.463015 668 log.go:172] (0xc0009f8790) Data frame received for 1\nI0408 23:52:53.463032 668 log.go:172] (0xc0008d21e0) (1) Data frame handling\nI0408 23:52:53.463043 668 log.go:172] (0xc0008d21e0) (1) Data frame sent\nI0408 23:52:53.463053 668 log.go:172] (0xc0009f8790) (0xc0008d21e0) Stream removed, broadcasting: 1\nI0408 23:52:53.463061 668 log.go:172] (0xc0009f8790) Go away received\nI0408 23:52:53.463325 668 log.go:172] (0xc0009f8790) (0xc0008d21e0) Stream removed, broadcasting: 1\nI0408 23:52:53.463341 668 log.go:172] (0xc0009f8790) (0xc000920000) Stream removed, broadcasting: 3\nI0408 23:52:53.463348 668 log.go:172] (0xc0009f8790) (0xc0006e9400) Stream removed, broadcasting: 5\n" Apr 8 23:52:53.466: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 23:52:53.466: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 23:53:13.489: INFO: Waiting for StatefulSet statefulset-1568/ss2 to complete update STEP: Rolling back to a previous revision Apr 8 23:53:23.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1568 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 23:53:23.778: INFO: stderr: "I0408 23:53:23.622337 689 log.go:172] (0xc00003a420) (0xc000544640) Create stream\nI0408 23:53:23.622381 689 log.go:172] (0xc00003a420) (0xc000544640) Stream added, broadcasting: 1\nI0408 23:53:23.625334 689 log.go:172] (0xc00003a420) Reply frame received for 1\nI0408 23:53:23.625413 689 log.go:172] (0xc00003a420) (0xc000928000) Create stream\nI0408 23:53:23.625430 689 log.go:172] (0xc00003a420) (0xc000928000) Stream added, broadcasting: 3\nI0408 23:53:23.626449 689 
log.go:172] (0xc00003a420) Reply frame received for 3\nI0408 23:53:23.626491 689 log.go:172] (0xc00003a420) (0xc0008e8000) Create stream\nI0408 23:53:23.626508 689 log.go:172] (0xc00003a420) (0xc0008e8000) Stream added, broadcasting: 5\nI0408 23:53:23.627608 689 log.go:172] (0xc00003a420) Reply frame received for 5\nI0408 23:53:23.714532 689 log.go:172] (0xc00003a420) Data frame received for 5\nI0408 23:53:23.714556 689 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0408 23:53:23.714565 689 log.go:172] (0xc0008e8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 23:53:23.770412 689 log.go:172] (0xc00003a420) Data frame received for 3\nI0408 23:53:23.770462 689 log.go:172] (0xc000928000) (3) Data frame handling\nI0408 23:53:23.770496 689 log.go:172] (0xc000928000) (3) Data frame sent\nI0408 23:53:23.770747 689 log.go:172] (0xc00003a420) Data frame received for 3\nI0408 23:53:23.770789 689 log.go:172] (0xc00003a420) Data frame received for 5\nI0408 23:53:23.770940 689 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0408 23:53:23.770971 689 log.go:172] (0xc000928000) (3) Data frame handling\nI0408 23:53:23.772721 689 log.go:172] (0xc00003a420) Data frame received for 1\nI0408 23:53:23.772737 689 log.go:172] (0xc000544640) (1) Data frame handling\nI0408 23:53:23.772746 689 log.go:172] (0xc000544640) (1) Data frame sent\nI0408 23:53:23.772843 689 log.go:172] (0xc00003a420) (0xc000544640) Stream removed, broadcasting: 1\nI0408 23:53:23.773015 689 log.go:172] (0xc00003a420) Go away received\nI0408 23:53:23.773308 689 log.go:172] (0xc00003a420) (0xc000544640) Stream removed, broadcasting: 1\nI0408 23:53:23.773323 689 log.go:172] (0xc00003a420) (0xc000928000) Stream removed, broadcasting: 3\nI0408 23:53:23.773329 689 log.go:172] (0xc00003a420) (0xc0008e8000) Stream removed, broadcasting: 5\n" Apr 8 23:53:23.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 23:53:23.778: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 23:53:33.808: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 8 23:53:43.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1568 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 23:53:44.085: INFO: stderr: "I0408 23:53:43.972901 709 log.go:172] (0xc0000f2bb0) (0xc000a1c000) Create stream\nI0408 23:53:43.972968 709 log.go:172] (0xc0000f2bb0) (0xc000a1c000) Stream added, broadcasting: 1\nI0408 23:53:43.981995 709 log.go:172] (0xc0000f2bb0) Reply frame received for 1\nI0408 23:53:43.982040 709 log.go:172] (0xc0000f2bb0) (0xc000ab8000) Create stream\nI0408 23:53:43.982058 709 log.go:172] (0xc0000f2bb0) (0xc000ab8000) Stream added, broadcasting: 3\nI0408 23:53:43.989800 709 log.go:172] (0xc0000f2bb0) Reply frame received for 3\nI0408 23:53:43.989838 709 log.go:172] (0xc0000f2bb0) (0xc000a1c0a0) Create stream\nI0408 23:53:43.989850 709 log.go:172] (0xc0000f2bb0) (0xc000a1c0a0) Stream added, broadcasting: 5\nI0408 23:53:43.991064 709 log.go:172] (0xc0000f2bb0) Reply frame received for 5\nI0408 23:53:44.077679 709 log.go:172] (0xc0000f2bb0) Data frame received for 3\nI0408 23:53:44.077719 709 log.go:172] (0xc000ab8000) (3) Data frame handling\nI0408 23:53:44.077742 709 log.go:172] (0xc000ab8000) (3) Data frame sent\nI0408 23:53:44.077888 709 log.go:172] (0xc0000f2bb0) Data frame received for 5\nI0408 23:53:44.077903 709 log.go:172] (0xc000a1c0a0) (5) Data frame handling\nI0408 23:53:44.077910 709 log.go:172] (0xc000a1c0a0) (5) Data frame sent\nI0408 23:53:44.077914 709 log.go:172] (0xc0000f2bb0) Data frame received for 5\nI0408 23:53:44.077919 709 log.go:172] (0xc000a1c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 23:53:44.077961 709 
log.go:172] (0xc0000f2bb0) Data frame received for 3\nI0408 23:53:44.078001 709 log.go:172] (0xc000ab8000) (3) Data frame handling\nI0408 23:53:44.079946 709 log.go:172] (0xc0000f2bb0) Data frame received for 1\nI0408 23:53:44.079975 709 log.go:172] (0xc000a1c000) (1) Data frame handling\nI0408 23:53:44.079996 709 log.go:172] (0xc000a1c000) (1) Data frame sent\nI0408 23:53:44.080013 709 log.go:172] (0xc0000f2bb0) (0xc000a1c000) Stream removed, broadcasting: 1\nI0408 23:53:44.080125 709 log.go:172] (0xc0000f2bb0) Go away received\nI0408 23:53:44.080401 709 log.go:172] (0xc0000f2bb0) (0xc000a1c000) Stream removed, broadcasting: 1\nI0408 23:53:44.080421 709 log.go:172] (0xc0000f2bb0) (0xc000ab8000) Stream removed, broadcasting: 3\nI0408 23:53:44.080434 709 log.go:172] (0xc0000f2bb0) (0xc000a1c0a0) Stream removed, broadcasting: 5\n" Apr 8 23:53:44.085: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 23:53:44.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 23:53:54.112: INFO: Waiting for StatefulSet statefulset-1568/ss2 to complete update Apr 8 23:53:54.112: INFO: Waiting for Pod statefulset-1568/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 23:53:54.112: INFO: Waiting for Pod statefulset-1568/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 23:54:04.119: INFO: Waiting for StatefulSet statefulset-1568/ss2 to complete update Apr 8 23:54:04.119: INFO: Waiting for Pod statefulset-1568/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 23:54:14.139: INFO: Waiting for StatefulSet statefulset-1568/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 8 23:54:24.121: INFO: Deleting all statefulset in ns statefulset-1568 
Apr 8 23:54:24.124: INFO: Scaling statefulset ss2 to 0
Apr 8 23:54:54.139: INFO: Waiting for statefulset status.replicas updated to 0
Apr 8 23:54:54.142: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 8 23:54:54.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1568" for this suite.
• [SLOW TEST:161.354 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":70,"skipped":1109,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 8 23:54:54.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2456 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 23:54:54.252: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 8 23:54:54.299: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:54:56.302: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 8 23:54:58.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:55:00.303: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:55:02.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:55:04.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:55:06.303: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 8 23:55:08.303: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 8 23:55:08.309: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 23:55:10.313: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 23:55:12.314: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 23:55:14.314: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 8 23:55:16.313: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 8 23:55:20.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.95:8080/dial?request=hostname&protocol=http&host=10.244.2.94&port=8080&tries=1'] Namespace:pod-network-test-2456 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:55:20.338: INFO: >>> kubeConfig: /root/.kube/config I0408 23:55:20.367796 7 log.go:172] (0xc00479e160) (0xc0018d70e0) Create stream I0408 23:55:20.367846 7 
log.go:172] (0xc00479e160) (0xc0018d70e0) Stream added, broadcasting: 1 I0408 23:55:20.369865 7 log.go:172] (0xc00479e160) Reply frame received for 1 I0408 23:55:20.369917 7 log.go:172] (0xc00479e160) (0xc0013628c0) Create stream I0408 23:55:20.369931 7 log.go:172] (0xc00479e160) (0xc0013628c0) Stream added, broadcasting: 3 I0408 23:55:20.370796 7 log.go:172] (0xc00479e160) Reply frame received for 3 I0408 23:55:20.370829 7 log.go:172] (0xc00479e160) (0xc001362a00) Create stream I0408 23:55:20.370844 7 log.go:172] (0xc00479e160) (0xc001362a00) Stream added, broadcasting: 5 I0408 23:55:20.371702 7 log.go:172] (0xc00479e160) Reply frame received for 5 I0408 23:55:20.473204 7 log.go:172] (0xc00479e160) Data frame received for 3 I0408 23:55:20.473267 7 log.go:172] (0xc0013628c0) (3) Data frame handling I0408 23:55:20.473281 7 log.go:172] (0xc0013628c0) (3) Data frame sent I0408 23:55:20.474064 7 log.go:172] (0xc00479e160) Data frame received for 3 I0408 23:55:20.474096 7 log.go:172] (0xc0013628c0) (3) Data frame handling I0408 23:55:20.474367 7 log.go:172] (0xc00479e160) Data frame received for 5 I0408 23:55:20.474405 7 log.go:172] (0xc001362a00) (5) Data frame handling I0408 23:55:20.476396 7 log.go:172] (0xc00479e160) Data frame received for 1 I0408 23:55:20.476437 7 log.go:172] (0xc0018d70e0) (1) Data frame handling I0408 23:55:20.476488 7 log.go:172] (0xc0018d70e0) (1) Data frame sent I0408 23:55:20.476506 7 log.go:172] (0xc00479e160) (0xc0018d70e0) Stream removed, broadcasting: 1 I0408 23:55:20.476522 7 log.go:172] (0xc00479e160) Go away received I0408 23:55:20.476685 7 log.go:172] (0xc00479e160) (0xc0018d70e0) Stream removed, broadcasting: 1 I0408 23:55:20.476742 7 log.go:172] (0xc00479e160) (0xc0013628c0) Stream removed, broadcasting: 3 I0408 23:55:20.476756 7 log.go:172] (0xc00479e160) (0xc001362a00) Stream removed, broadcasting: 5 Apr 8 23:55:20.476: INFO: Waiting for responses: map[] Apr 8 23:55:20.480: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.95:8080/dial?request=hostname&protocol=http&host=10.244.1.136&port=8080&tries=1'] Namespace:pod-network-test-2456 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 23:55:20.480: INFO: >>> kubeConfig: /root/.kube/config I0408 23:55:20.524206 7 log.go:172] (0xc002f12420) (0xc001362f00) Create stream I0408 23:55:20.524255 7 log.go:172] (0xc002f12420) (0xc001362f00) Stream added, broadcasting: 1 I0408 23:55:20.530392 7 log.go:172] (0xc002f12420) Reply frame received for 1 I0408 23:55:20.530427 7 log.go:172] (0xc002f12420) (0xc001bca820) Create stream I0408 23:55:20.530440 7 log.go:172] (0xc002f12420) (0xc001bca820) Stream added, broadcasting: 3 I0408 23:55:20.531165 7 log.go:172] (0xc002f12420) Reply frame received for 3 I0408 23:55:20.531202 7 log.go:172] (0xc002f12420) (0xc0018aa000) Create stream I0408 23:55:20.531217 7 log.go:172] (0xc002f12420) (0xc0018aa000) Stream added, broadcasting: 5 I0408 23:55:20.532547 7 log.go:172] (0xc002f12420) Reply frame received for 5 I0408 23:55:20.593432 7 log.go:172] (0xc002f12420) Data frame received for 3 I0408 23:55:20.593465 7 log.go:172] (0xc001bca820) (3) Data frame handling I0408 23:55:20.593486 7 log.go:172] (0xc001bca820) (3) Data frame sent I0408 23:55:20.594084 7 log.go:172] (0xc002f12420) Data frame received for 3 I0408 23:55:20.594111 7 log.go:172] (0xc001bca820) (3) Data frame handling I0408 23:55:20.594151 7 log.go:172] (0xc002f12420) Data frame received for 5 I0408 23:55:20.594193 7 log.go:172] (0xc0018aa000) (5) Data frame handling I0408 23:55:20.595599 7 log.go:172] (0xc002f12420) Data frame received for 1 I0408 23:55:20.595627 7 log.go:172] (0xc001362f00) (1) Data frame handling I0408 23:55:20.595646 7 log.go:172] (0xc001362f00) (1) Data frame sent I0408 23:55:20.595678 7 log.go:172] (0xc002f12420) (0xc001362f00) Stream removed, broadcasting: 1 I0408 23:55:20.595720 7 log.go:172] (0xc002f12420) Go away received I0408 
23:55:20.595807 7 log.go:172] (0xc002f12420) (0xc001362f00) Stream removed, broadcasting: 1 I0408 23:55:20.595826 7 log.go:172] (0xc002f12420) (0xc001bca820) Stream removed, broadcasting: 3 I0408 23:55:20.595842 7 log.go:172] (0xc002f12420) (0xc0018aa000) Stream removed, broadcasting: 5 Apr 8 23:55:20.595: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:55:20.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2456" for this suite. • [SLOW TEST:26.427 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1122,"failed":0} S ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:55:20.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 8 23:55:20.650: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Apr 8 23:55:21.347: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 8 23:55:23.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986921, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986921, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986921, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721986921, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 23:55:26.201: INFO: Waited 784.056628ms for the sample-apiserver to be ready to handle requests. 
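The DeploymentStatus dump above shows the two conditions the suite polls: `Available` (still `False` with reason `MinimumReplicasUnavailable`) and `Progressing`. A hedged sketch of the readiness check, written against a plain-dict rendering of the status rather than the typed Go struct:

```python
def deployment_available(status):
    """True once the Deployment's Available condition reports 'True'.
    While it reports MinimumReplicasUnavailable, as in the dump above,
    the wait loop keeps polling."""
    for cond in status.get("conditions", []):
        if cond.get("type") == "Available":
            return cond.get("status") == "True"
    return False
```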
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:55:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4483" for this suite. • [SLOW TEST:7.198 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":72,"skipped":1123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:55:27.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 8 23:55:28.085: INFO: Waiting up to 5m0s for pod 
"client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d" in namespace "containers-1130" to be "Succeeded or Failed" Apr 8 23:55:28.130: INFO: Pod "client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d": Phase="Pending", Reason="", readiness=false. Elapsed: 45.013256ms Apr 8 23:55:30.135: INFO: Pod "client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049340963s Apr 8 23:55:32.139: INFO: Pod "client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053420587s STEP: Saw pod success Apr 8 23:55:32.139: INFO: Pod "client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d" satisfied condition "Succeeded or Failed" Apr 8 23:55:32.142: INFO: Trying to get logs from node latest-worker2 pod client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d container test-container: STEP: delete the pod Apr 8 23:55:32.197: INFO: Waiting for pod client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d to disappear Apr 8 23:55:32.203: INFO: Pod client-containers-6ecd3b4f-9a9f-4118-aeb2-dc8545bbd82d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:55:32.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1130" for this suite. 
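In the intra-pod networking spec earlier, each connectivity check is a `curl` from the test pod to its own `/dial` endpoint, which then dials the target netserver and echoes the result. A sketch reconstructing that query string; the parameter names are taken verbatim from the URLs in the log, everything else is illustrative:

```python
from urllib.parse import urlencode

def dial_url(test_pod_ip, target_ip, port=8080, protocol="http", tries=1):
    """Build the /dial probe URL the webserver container is curl'ed with;
    the pod at test_pod_ip dials target_ip:port and reports back."""
    query = urlencode({
        "request": "hostname",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return "http://{}:8080/dial?{}".format(test_pod_ip, query)
```

An empty `Waiting for responses: map[]` line after such a probe means every expected hostname was collected.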
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:55:32.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-3235ee22-7c2c-4d6e-bfc5-7d33dfc684b9 in namespace container-probe-4427 Apr 8 23:55:36.288: INFO: Started pod liveness-3235ee22-7c2c-4d6e-bfc5-7d33dfc684b9 in namespace container-probe-4427 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 23:55:36.290: INFO: Initial restart count of pod liveness-3235ee22-7c2c-4d6e-bfc5-7d33dfc684b9 is 0 Apr 8 23:56:00.362: INFO: Restart count of pod container-probe-4427/liveness-3235ee22-7c2c-4d6e-bfc5-7d33dfc684b9 is now 1 (24.071519383s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:56:00.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-4427" for this suite. • [SLOW TEST:28.209 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1197,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:56:00.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1400, will wait for the garbage collector to delete the pods Apr 8 23:56:04.549: INFO: Deleting Job.batch foo took: 6.45103ms Apr 8 23:56:04.849: INFO: Terminating Job.batch foo pods took: 300.291592ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:56:43.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1400" for this suite. 
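The liveness-probe spec above records an initial restartCount of 0, then waits until the kubelet has restarted the container (count 1 after ~24s). A minimal sketch of that polling logic; `get_restart_count` is a hypothetical stand-in for an API or kubectl lookup:

```python
import time

def wait_for_restart(get_restart_count, initial, timeout=150.0, interval=2.0):
    """Poll a container's restartCount until it rises above `initial`,
    mirroring the restart check in the probe spec above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        count = get_restart_count()
        if count > initial:
            return count
        time.sleep(interval)
    raise TimeoutError("restartCount did not increase within %ss" % timeout)
```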
• [SLOW TEST:42.643 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":75,"skipped":1209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:56:43.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:56:43.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5031" for this suite. 
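After each spec, the runner emits a one-line JSON summary like the `PASSED` line above, carrying the running totals. A small sketch of how such a line can be parsed to track suite progress (a hypothetical helper, not part of the e2e framework):

```python
import json

def suite_progress(summary_line):
    """Parse one per-spec JSON summary line and report how many of the
    planned specs remain, plus the pass/fail status so far."""
    s = json.loads(summary_line)
    return {
        "passed": s["msg"].startswith("PASSED"),
        "remaining": s["total"] - s["completed"],
        "failed": s["failed"],
    }
```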
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1235,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:56:43.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-7585 STEP: creating replication controller nodeport-test in namespace services-7585 I0408 23:56:43.288594 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7585, replica count: 2 I0408 23:56:46.339078 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 23:56:49.339368 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 23:56:49.339: INFO: Creating new exec pod Apr 8 23:56:54.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7585 execpodfjmfz -- /bin/sh -x -c nc -zv -t -w 
2 nodeport-test 80' Apr 8 23:56:57.070: INFO: stderr: "I0408 23:56:56.975382 729 log.go:172] (0xc0008b6160) (0xc0008fe1e0) Create stream\nI0408 23:56:56.975424 729 log.go:172] (0xc0008b6160) (0xc0008fe1e0) Stream added, broadcasting: 1\nI0408 23:56:56.978543 729 log.go:172] (0xc0008b6160) Reply frame received for 1\nI0408 23:56:56.978582 729 log.go:172] (0xc0008b6160) (0xc0008c80a0) Create stream\nI0408 23:56:56.978594 729 log.go:172] (0xc0008b6160) (0xc0008c80a0) Stream added, broadcasting: 3\nI0408 23:56:56.979944 729 log.go:172] (0xc0008b6160) Reply frame received for 3\nI0408 23:56:56.979987 729 log.go:172] (0xc0008b6160) (0xc00081f360) Create stream\nI0408 23:56:56.980002 729 log.go:172] (0xc0008b6160) (0xc00081f360) Stream added, broadcasting: 5\nI0408 23:56:56.981062 729 log.go:172] (0xc0008b6160) Reply frame received for 5\nI0408 23:56:57.061966 729 log.go:172] (0xc0008b6160) Data frame received for 5\nI0408 23:56:57.061993 729 log.go:172] (0xc00081f360) (5) Data frame handling\nI0408 23:56:57.062012 729 log.go:172] (0xc00081f360) (5) Data frame sent\nI0408 23:56:57.062023 729 log.go:172] (0xc0008b6160) Data frame received for 5\nI0408 23:56:57.062032 729 log.go:172] (0xc00081f360) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0408 23:56:57.062055 729 log.go:172] (0xc00081f360) (5) Data frame sent\nI0408 23:56:57.062535 729 log.go:172] (0xc0008b6160) Data frame received for 3\nI0408 23:56:57.062567 729 log.go:172] (0xc0008c80a0) (3) Data frame handling\nI0408 23:56:57.062907 729 log.go:172] (0xc0008b6160) Data frame received for 5\nI0408 23:56:57.062944 729 log.go:172] (0xc00081f360) (5) Data frame handling\nI0408 23:56:57.064600 729 log.go:172] (0xc0008b6160) Data frame received for 1\nI0408 23:56:57.064620 729 log.go:172] (0xc0008fe1e0) (1) Data frame handling\nI0408 23:56:57.064632 729 log.go:172] (0xc0008fe1e0) (1) Data frame sent\nI0408 23:56:57.064645 729 log.go:172] 
(0xc0008b6160) (0xc0008fe1e0) Stream removed, broadcasting: 1\nI0408 23:56:57.064943 729 log.go:172] (0xc0008b6160) Go away received\nI0408 23:56:57.065032 729 log.go:172] (0xc0008b6160) (0xc0008fe1e0) Stream removed, broadcasting: 1\nI0408 23:56:57.065048 729 log.go:172] (0xc0008b6160) (0xc0008c80a0) Stream removed, broadcasting: 3\nI0408 23:56:57.065059 729 log.go:172] (0xc0008b6160) (0xc00081f360) Stream removed, broadcasting: 5\n" Apr 8 23:56:57.070: INFO: stdout: "" Apr 8 23:56:57.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7585 execpodfjmfz -- /bin/sh -x -c nc -zv -t -w 2 10.96.228.183 80' Apr 8 23:56:57.265: INFO: stderr: "I0408 23:56:57.199314 763 log.go:172] (0xc000be6790) (0xc0009d6000) Create stream\nI0408 23:56:57.199362 763 log.go:172] (0xc000be6790) (0xc0009d6000) Stream added, broadcasting: 1\nI0408 23:56:57.202050 763 log.go:172] (0xc000be6790) Reply frame received for 1\nI0408 23:56:57.202115 763 log.go:172] (0xc000be6790) (0xc0006cb2c0) Create stream\nI0408 23:56:57.202137 763 log.go:172] (0xc000be6790) (0xc0006cb2c0) Stream added, broadcasting: 3\nI0408 23:56:57.203152 763 log.go:172] (0xc000be6790) Reply frame received for 3\nI0408 23:56:57.203199 763 log.go:172] (0xc000be6790) (0xc0003a2000) Create stream\nI0408 23:56:57.203222 763 log.go:172] (0xc000be6790) (0xc0003a2000) Stream added, broadcasting: 5\nI0408 23:56:57.204590 763 log.go:172] (0xc000be6790) Reply frame received for 5\nI0408 23:56:57.258603 763 log.go:172] (0xc000be6790) Data frame received for 5\nI0408 23:56:57.258637 763 log.go:172] (0xc0003a2000) (5) Data frame handling\nI0408 23:56:57.258659 763 log.go:172] (0xc0003a2000) (5) Data frame sent\nI0408 23:56:57.258670 763 log.go:172] (0xc000be6790) Data frame received for 5\nI0408 23:56:57.258692 763 log.go:172] (0xc0003a2000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.228.183 80\nConnection to 10.96.228.183 80 port [tcp/http] 
succeeded!\nI0408 23:56:57.258733 763 log.go:172] (0xc0003a2000) (5) Data frame sent\nI0408 23:56:57.259007 763 log.go:172] (0xc000be6790) Data frame received for 5\nI0408 23:56:57.259036 763 log.go:172] (0xc0003a2000) (5) Data frame handling\nI0408 23:56:57.259092 763 log.go:172] (0xc000be6790) Data frame received for 3\nI0408 23:56:57.259130 763 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0408 23:56:57.260608 763 log.go:172] (0xc000be6790) Data frame received for 1\nI0408 23:56:57.260661 763 log.go:172] (0xc0009d6000) (1) Data frame handling\nI0408 23:56:57.260692 763 log.go:172] (0xc0009d6000) (1) Data frame sent\nI0408 23:56:57.260714 763 log.go:172] (0xc000be6790) (0xc0009d6000) Stream removed, broadcasting: 1\nI0408 23:56:57.260753 763 log.go:172] (0xc000be6790) Go away received\nI0408 23:56:57.261077 763 log.go:172] (0xc000be6790) (0xc0009d6000) Stream removed, broadcasting: 1\nI0408 23:56:57.261095 763 log.go:172] (0xc000be6790) (0xc0006cb2c0) Stream removed, broadcasting: 3\nI0408 23:56:57.261103 763 log.go:172] (0xc000be6790) (0xc0003a2000) Stream removed, broadcasting: 5\n" Apr 8 23:56:57.265: INFO: stdout: "" Apr 8 23:56:57.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7585 execpodfjmfz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30004' Apr 8 23:56:57.464: INFO: stderr: "I0408 23:56:57.399768 784 log.go:172] (0xc0009709a0) (0xc0006795e0) Create stream\nI0408 23:56:57.399843 784 log.go:172] (0xc0009709a0) (0xc0006795e0) Stream added, broadcasting: 1\nI0408 23:56:57.402278 784 log.go:172] (0xc0009709a0) Reply frame received for 1\nI0408 23:56:57.402299 784 log.go:172] (0xc0009709a0) (0xc0008ac000) Create stream\nI0408 23:56:57.402314 784 log.go:172] (0xc0009709a0) (0xc0008ac000) Stream added, broadcasting: 3\nI0408 23:56:57.403191 784 log.go:172] (0xc0009709a0) Reply frame received for 3\nI0408 23:56:57.403239 784 log.go:172] (0xc0009709a0) (0xc000679680) 
Create stream\nI0408 23:56:57.403256 784 log.go:172] (0xc0009709a0) (0xc000679680) Stream added, broadcasting: 5\nI0408 23:56:57.404080 784 log.go:172] (0xc0009709a0) Reply frame received for 5\nI0408 23:56:57.459222 784 log.go:172] (0xc0009709a0) Data frame received for 3\nI0408 23:56:57.459257 784 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0408 23:56:57.459277 784 log.go:172] (0xc0009709a0) Data frame received for 5\nI0408 23:56:57.459284 784 log.go:172] (0xc000679680) (5) Data frame handling\nI0408 23:56:57.459293 784 log.go:172] (0xc000679680) (5) Data frame sent\nI0408 23:56:57.459300 784 log.go:172] (0xc0009709a0) Data frame received for 5\nI0408 23:56:57.459307 784 log.go:172] (0xc000679680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30004\nConnection to 172.17.0.13 30004 port [tcp/30004] succeeded!\nI0408 23:56:57.460617 784 log.go:172] (0xc0009709a0) Data frame received for 1\nI0408 23:56:57.460702 784 log.go:172] (0xc0006795e0) (1) Data frame handling\nI0408 23:56:57.460773 784 log.go:172] (0xc0006795e0) (1) Data frame sent\nI0408 23:56:57.460827 784 log.go:172] (0xc0009709a0) (0xc0006795e0) Stream removed, broadcasting: 1\nI0408 23:56:57.460849 784 log.go:172] (0xc0009709a0) Go away received\nI0408 23:56:57.461491 784 log.go:172] (0xc0009709a0) (0xc0006795e0) Stream removed, broadcasting: 1\nI0408 23:56:57.461512 784 log.go:172] (0xc0009709a0) (0xc0008ac000) Stream removed, broadcasting: 3\nI0408 23:56:57.461522 784 log.go:172] (0xc0009709a0) (0xc000679680) Stream removed, broadcasting: 5\n" Apr 8 23:56:57.465: INFO: stdout: "" Apr 8 23:56:57.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7585 execpodfjmfz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30004' Apr 8 23:56:57.683: INFO: stderr: "I0408 23:56:57.602947 805 log.go:172] (0xc00096e160) (0xc0008252c0) Create stream\nI0408 23:56:57.603010 805 log.go:172] (0xc00096e160) (0xc0008252c0) Stream 
added, broadcasting: 1\nI0408 23:56:57.604850 805 log.go:172] (0xc00096e160) Reply frame received for 1\nI0408 23:56:57.604909 805 log.go:172] (0xc00096e160) (0xc00096a000) Create stream\nI0408 23:56:57.604929 805 log.go:172] (0xc00096e160) (0xc00096a000) Stream added, broadcasting: 3\nI0408 23:56:57.606097 805 log.go:172] (0xc00096e160) Reply frame received for 3\nI0408 23:56:57.606143 805 log.go:172] (0xc00096e160) (0xc00096a0a0) Create stream\nI0408 23:56:57.606154 805 log.go:172] (0xc00096e160) (0xc00096a0a0) Stream added, broadcasting: 5\nI0408 23:56:57.606875 805 log.go:172] (0xc00096e160) Reply frame received for 5\nI0408 23:56:57.676166 805 log.go:172] (0xc00096e160) Data frame received for 5\nI0408 23:56:57.676211 805 log.go:172] (0xc00096a0a0) (5) Data frame handling\nI0408 23:56:57.676242 805 log.go:172] (0xc00096a0a0) (5) Data frame sent\nI0408 23:56:57.676262 805 log.go:172] (0xc00096e160) Data frame received for 5\nI0408 23:56:57.676279 805 log.go:172] (0xc00096a0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30004\nConnection to 172.17.0.12 30004 port [tcp/30004] succeeded!\nI0408 23:56:57.676598 805 log.go:172] (0xc00096e160) Data frame received for 3\nI0408 23:56:57.676622 805 log.go:172] (0xc00096a000) (3) Data frame handling\nI0408 23:56:57.678100 805 log.go:172] (0xc00096e160) Data frame received for 1\nI0408 23:56:57.678123 805 log.go:172] (0xc0008252c0) (1) Data frame handling\nI0408 23:56:57.678134 805 log.go:172] (0xc0008252c0) (1) Data frame sent\nI0408 23:56:57.678152 805 log.go:172] (0xc00096e160) (0xc0008252c0) Stream removed, broadcasting: 1\nI0408 23:56:57.678177 805 log.go:172] (0xc00096e160) Go away received\nI0408 23:56:57.678617 805 log.go:172] (0xc00096e160) (0xc0008252c0) Stream removed, broadcasting: 1\nI0408 23:56:57.678639 805 log.go:172] (0xc00096e160) (0xc00096a000) Stream removed, broadcasting: 3\nI0408 23:56:57.678651 805 log.go:172] (0xc00096e160) (0xc00096a0a0) Stream removed, broadcasting: 5\n" Apr 8 
23:56:57.683: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:56:57.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7585" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.494 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":77,"skipped":1242,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:56:57.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom 
resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 8 23:56:58.601: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 8 23:57:00.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987018, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987018, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987018, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987018, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 23:57:03.683: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:57:03.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:04.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-webhook-6868" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.556 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":78,"skipped":1243,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:05.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 8 23:57:05.303: INFO: Waiting up to 5m0s for pod "pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791" in namespace "emptydir-7230" to be "Succeeded or Failed" Apr 8 23:57:05.332: INFO: Pod "pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791": Phase="Pending", Reason="", 
readiness=false. Elapsed: 28.847026ms Apr 8 23:57:07.353: INFO: Pod "pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050329938s Apr 8 23:57:09.357: INFO: Pod "pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054647839s STEP: Saw pod success Apr 8 23:57:09.357: INFO: Pod "pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791" satisfied condition "Succeeded or Failed" Apr 8 23:57:09.360: INFO: Trying to get logs from node latest-worker2 pod pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791 container test-container: STEP: delete the pod Apr 8 23:57:09.407: INFO: Waiting for pod pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791 to disappear Apr 8 23:57:09.421: INFO: Pod pod-405a2c27-c15b-4c03-8bf1-f5a053a8f791 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:09.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7230" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1245,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:09.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-62fe81e3-9fa5-4a0a-8426-c9d3c8b2c119 STEP: Creating a pod to test consume configMaps Apr 8 23:57:09.509: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7" in namespace "projected-6604" to be "Succeeded or Failed" Apr 8 23:57:09.511: INFO: Pod "pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140828ms Apr 8 23:57:11.515: INFO: Pod "pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005943401s Apr 8 23:57:13.519: INFO: Pod "pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009965091s STEP: Saw pod success Apr 8 23:57:13.519: INFO: Pod "pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7" satisfied condition "Succeeded or Failed" Apr 8 23:57:13.522: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7 container projected-configmap-volume-test: STEP: delete the pod Apr 8 23:57:13.570: INFO: Waiting for pod pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7 to disappear Apr 8 23:57:13.577: INFO: Pod pod-projected-configmaps-3f3f7710-89a3-4553-b179-4f85e6cea8a7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6604" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:13.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:57:13.635: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 3.927707ms)
Apr 8 23:57:13.642: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 6.398477ms)
Apr 8 23:57:13.648: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 6.033788ms)
Apr 8 23:57:13.653: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 5.26084ms)
Apr 8 23:57:13.679: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 25.313105ms)
Apr 8 23:57:13.694: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 15.75547ms)
Apr 8 23:57:13.697: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.601568ms)
Apr 8 23:57:13.699: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.189341ms)
Apr 8 23:57:13.701: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.215097ms)
Apr 8 23:57:13.704: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.555079ms)
Apr 8 23:57:13.706: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.386456ms)
Apr 8 23:57:13.709: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.229482ms)
Apr 8 23:57:13.711: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.066839ms)
Apr 8 23:57:13.713: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.303582ms)
Apr 8 23:57:13.715: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.209645ms)
Apr 8 23:57:13.718: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.456477ms)
Apr 8 23:57:13.720: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.270331ms)
Apr 8 23:57:13.723: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.594779ms)
Apr 8 23:57:13.725: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.597523ms)
Apr 8 23:57:13.728: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 2.465367ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:13.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4194" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":81,"skipped":1316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:13.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-07bafeaf-6617-4d99-a11f-b933ca603d31 STEP: Creating a pod to test consume secrets Apr 8 23:57:13.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743" in namespace "projected-888" to be "Succeeded or Failed" Apr 8 23:57:13.826: INFO: Pod "pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743": Phase="Pending", Reason="", readiness=false. Elapsed: 31.940478ms Apr 8 23:57:15.829: INFO: Pod "pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035341867s Apr 8 23:57:17.833: INFO: Pod "pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039497965s STEP: Saw pod success Apr 8 23:57:17.833: INFO: Pod "pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743" satisfied condition "Succeeded or Failed" Apr 8 23:57:17.836: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743 container secret-volume-test: STEP: delete the pod Apr 8 23:57:17.893: INFO: Waiting for pod pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743 to disappear Apr 8 23:57:17.906: INFO: Pod pod-projected-secrets-6d3be33e-717e-4f82-97b1-7b1d7c46a743 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:17.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-888" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1401,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:17.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-9373/configmap-test-67889fce-f814-4d79-a495-a783681dcd84 STEP: Creating a pod to test consume configMaps Apr 8 23:57:17.999: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967" in namespace "configmap-9373" to be "Succeeded or Failed" Apr 8 23:57:18.002: INFO: Pod "pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627536ms Apr 8 23:57:20.006: INFO: Pod "pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007823392s Apr 8 23:57:22.011: INFO: Pod "pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012085219s STEP: Saw pod success Apr 8 23:57:22.011: INFO: Pod "pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967" satisfied condition "Succeeded or Failed" Apr 8 23:57:22.014: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967 container env-test: STEP: delete the pod Apr 8 23:57:22.033: INFO: Waiting for pod pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967 to disappear Apr 8 23:57:22.050: INFO: Pod pod-configmaps-9a11ab56-a992-4eaa-98e1-4d41d947e967 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:22.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9373" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1413,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:22.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 8 23:57:22.152: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 8 23:57:27.158: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then 
the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:27.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1227" for this suite. • [SLOW TEST:5.195 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":84,"skipped":1425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:27.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:57:27.331: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 23:57:30.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-480 create -f -' Apr 8 23:57:33.791: INFO: stderr: "" Apr 8 23:57:33.791: INFO: stdout: "e2e-test-crd-publish-openapi-2178-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 23:57:33.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-480 delete e2e-test-crd-publish-openapi-2178-crds test-cr' Apr 8 23:57:33.915: INFO: stderr: "" Apr 8 23:57:33.915: INFO: stdout: "e2e-test-crd-publish-openapi-2178-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 8 23:57:33.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-480 apply -f -' Apr 8 23:57:34.152: INFO: stderr: "" Apr 8 23:57:34.152: INFO: stdout: "e2e-test-crd-publish-openapi-2178-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 23:57:34.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-480 delete e2e-test-crd-publish-openapi-2178-crds test-cr' Apr 8 23:57:34.246: INFO: stderr: "" Apr 8 23:57:34.247: INFO: stdout: "e2e-test-crd-publish-openapi-2178-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 8 23:57:34.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2178-crds' Apr 8 23:57:34.480: INFO: stderr: "" Apr 8 23:57:34.480: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2178-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:36.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-480" for this suite. • [SLOW TEST:9.136 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":85,"skipped":1473,"failed":0} [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:36.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 8 23:57:36.508: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019" in namespace 
"security-context-test-9918" to be "Succeeded or Failed" Apr 8 23:57:36.512: INFO: Pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.981541ms Apr 8 23:57:38.518: INFO: Pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010357745s Apr 8 23:57:40.524: INFO: Pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015985143s Apr 8 23:57:40.524: INFO: Pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019" satisfied condition "Succeeded or Failed" Apr 8 23:57:40.532: INFO: Got logs for pod "busybox-privileged-false-8575330f-8925-450d-a62b-5a984f9cc019": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:40.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9918" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1473,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:40.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 23:57:40.618: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e" in namespace "downward-api-8147" to be "Succeeded or Failed" Apr 8 23:57:40.626: INFO: Pod "downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04479ms Apr 8 23:57:42.630: INFO: Pod "downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011773648s Apr 8 23:57:44.634: INFO: Pod "downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015842672s STEP: Saw pod success Apr 8 23:57:44.634: INFO: Pod "downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e" satisfied condition "Succeeded or Failed" Apr 8 23:57:44.637: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e container client-container: STEP: delete the pod Apr 8 23:57:44.672: INFO: Waiting for pod downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e to disappear Apr 8 23:57:44.703: INFO: Pod downwardapi-volume-02ead1a7-c0c6-402d-b69c-cb15c7d1eb8e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:44.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8147" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1493,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:44.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:57:55.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4559" for this suite. • [SLOW TEST:11.194 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":88,"skipped":1495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:57:55.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-2707a302-1296-4d1d-baf8-6bcd67a3aead STEP: Creating a pod to test consume configMaps Apr 8 23:57:55.987: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf" in namespace "projected-1566" to be "Succeeded or Failed" Apr 8 23:57:55.991: INFO: Pod "pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509132ms Apr 8 23:57:57.995: INFO: Pod "pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007518641s Apr 8 23:57:59.999: INFO: Pod "pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011201553s STEP: Saw pod success Apr 8 23:57:59.999: INFO: Pod "pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf" satisfied condition "Succeeded or Failed" Apr 8 23:58:00.001: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf container projected-configmap-volume-test: STEP: delete the pod Apr 8 23:58:00.033: INFO: Waiting for pod pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf to disappear Apr 8 23:58:00.045: INFO: Pod pod-projected-configmaps-0e52a3dd-2c0d-49b8-b0b7-4feb455163bf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:00.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1566" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1519,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:00.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name 
projected-secret-test-8464bdaf-9878-47a6-94b9-0780118cb4a8 STEP: Creating a pod to test consume secrets Apr 8 23:58:00.167: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a" in namespace "projected-1317" to be "Succeeded or Failed" Apr 8 23:58:00.171: INFO: Pod "pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060065ms Apr 8 23:58:02.175: INFO: Pod "pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008056819s Apr 8 23:58:04.179: INFO: Pod "pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011782237s STEP: Saw pod success Apr 8 23:58:04.179: INFO: Pod "pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a" satisfied condition "Succeeded or Failed" Apr 8 23:58:04.181: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a container projected-secret-volume-test: STEP: delete the pod Apr 8 23:58:04.215: INFO: Waiting for pod pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a to disappear Apr 8 23:58:04.219: INFO: Pod pod-projected-secrets-afdde43e-5c98-4874-b0c6-b87d1379ea3a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:04.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1317" for this suite. 
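The pod this test creates mounts the secret through a projected volume with `defaultMode` set. A minimal sketch of such a manifest (the secret and container names are taken from the log; the pod name, image, and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed test image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # the file mode under test; applies to items without their own mode
      sources:
      - secret:
          name: projected-secret-test-8464bdaf-9878-47a6-94b9-0780118cb4a8
```

The `[LinuxOnly]` tag reflects that file-mode semantics for mounted volumes are only meaningful on Linux nodes.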
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1526,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:04.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Apr 8 23:58:04.383: INFO: Waiting up to 5m0s for pod "var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000" in namespace "var-expansion-5208" to be "Succeeded or Failed" Apr 8 23:58:04.435: INFO: Pod "var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000": Phase="Pending", Reason="", readiness=false. Elapsed: 51.248081ms Apr 8 23:58:06.443: INFO: Pod "var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059820028s Apr 8 23:58:08.447: INFO: Pod "var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064110015s STEP: Saw pod success Apr 8 23:58:08.448: INFO: Pod "var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000" satisfied condition "Succeeded or Failed" Apr 8 23:58:08.450: INFO: Trying to get logs from node latest-worker2 pod var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000 container dapi-container: STEP: delete the pod Apr 8 23:58:08.486: INFO: Waiting for pod var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000 to disappear Apr 8 23:58:08.501: INFO: Pod var-expansion-992090b7-91e6-41e9-9873-f2531c3f8000 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:08.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5208" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:08.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 8 23:58:12.610: INFO: Pod pod-hostip-f0ec739e-6bd2-441c-b9ad-2ee7de668e8a has hostIP: 172.17.0.12 [AfterEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6047" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1566,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:12.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 8 23:58:12.674: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932" in namespace "projected-7126" to be "Succeeded or Failed" Apr 8 23:58:12.680: INFO: Pod "downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508476ms Apr 8 23:58:14.684: INFO: Pod "downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009233069s Apr 8 23:58:16.688: INFO: Pod "downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013973154s STEP: Saw pod success Apr 8 23:58:16.688: INFO: Pod "downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932" satisfied condition "Succeeded or Failed" Apr 8 23:58:16.692: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932 container client-container: STEP: delete the pod Apr 8 23:58:16.705: INFO: Waiting for pod downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932 to disappear Apr 8 23:58:16.710: INFO: Pod downwardapi-volume-5258dac2-1f47-4ad2-b4d1-bf104a40e932 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:16.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7126" for this suite. 
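The recurring "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" entries come from a poll-with-timeout loop in the e2e framework. A simplified model of that loop (names and structure are assumptions; the real code polls the API server for the pod's phase):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       now=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    start = now()
    while now() - start < timeout:
        if check():
            return True
        sleep(interval)
    return False

# Example: simulate a pod that reaches "Succeeded" on the third poll,
# mirroring the Pending -> Pending -> Succeeded progression in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_condition(lambda: next(phases) == "Succeeded",
                            timeout=10, interval=0)
```

Each `INFO: Pod ... Phase="..."` line in the log corresponds to one iteration of such a loop, with the `Elapsed:` value being the time since `start`.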
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:16.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 8 23:58:16.787: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 8 23:58:28.281: INFO: >>> kubeConfig: /root/.kube/config Apr 8 23:58:31.191: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:58:41.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3605" for this suite. 
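A multi-version CRD of the shape this test publishes might look like the following (a sketch; the group, kind, and schema are assumptions, not taken from the log):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.crd-publish-openapi-test.example.com   # hypothetical
spec:
  group: crd-publish-openapi-test.example.com                # hypothetical group
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true      # flipping this to false removes v2 from the published OpenAPI spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

Every version with `served: true` shows up as a definition in the aggregated `/openapi/v2` document, which is what these CustomResourcePublishOpenAPI tests assert.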
• [SLOW TEST:24.952 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":94,"skipped":1603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:58:41.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0408 23:59:03.790336 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 23:59:03.790: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:59:03.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2220" for this suite. 
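The orphaning behavior this test verifies is selected per delete call via the propagation policy. A sketch of the request body that deletes the Deployment while leaving its ReplicaSet behind:

```yaml
# Body of the DELETE request against the Deployment (sketch)
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # dependents (the ReplicaSet) are not garbage collected
```

From the command line the equivalent is `kubectl delete deployment <name> --cascade=orphan` (older clients used `--cascade=false`); the garbage collector then strips the owner reference instead of deleting the ReplicaSet.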
• [SLOW TEST:22.118 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":95,"skipped":1631,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:59:03.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 8 23:59:03.870: INFO: Waiting up to 5m0s for pod "pod-ee55290e-c95e-4033-9471-21f2b777a66f" in namespace "emptydir-2034" to be "Succeeded or Failed" Apr 8 23:59:03.887: INFO: Pod "pod-ee55290e-c95e-4033-9471-21f2b777a66f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.293522ms Apr 8 23:59:05.891: INFO: Pod "pod-ee55290e-c95e-4033-9471-21f2b777a66f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021604181s Apr 8 23:59:07.896: INFO: Pod "pod-ee55290e-c95e-4033-9471-21f2b777a66f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025791076s STEP: Saw pod success Apr 8 23:59:07.896: INFO: Pod "pod-ee55290e-c95e-4033-9471-21f2b777a66f" satisfied condition "Succeeded or Failed" Apr 8 23:59:07.899: INFO: Trying to get logs from node latest-worker2 pod pod-ee55290e-c95e-4033-9471-21f2b777a66f container test-container: STEP: delete the pod Apr 8 23:59:07.960: INFO: Waiting for pod pod-ee55290e-c95e-4033-9471-21f2b777a66f to disappear Apr 8 23:59:07.968: INFO: Pod pod-ee55290e-c95e-4033-9471-21f2b777a66f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:59:07.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2034" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1631,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:59:07.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0408 23:59:48.314172 7 
metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 8 23:59:48.314: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:59:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6708" for this suite. 
• [SLOW TEST:40.346 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":97,"skipped":1653,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:59:48.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 8 23:59:48.380: INFO: Waiting up to 5m0s for pod "pod-d09b548a-750e-4df8-86c7-952a00743f45" in namespace "emptydir-6818" to be "Succeeded or Failed" Apr 8 23:59:48.409: INFO: Pod "pod-d09b548a-750e-4df8-86c7-952a00743f45": Phase="Pending", Reason="", readiness=false. Elapsed: 28.546341ms Apr 8 23:59:50.451: INFO: Pod "pod-d09b548a-750e-4df8-86c7-952a00743f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07092642s Apr 8 23:59:52.456: INFO: Pod "pod-d09b548a-750e-4df8-86c7-952a00743f45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075728149s STEP: Saw pod success Apr 8 23:59:52.456: INFO: Pod "pod-d09b548a-750e-4df8-86c7-952a00743f45" satisfied condition "Succeeded or Failed" Apr 8 23:59:52.462: INFO: Trying to get logs from node latest-worker2 pod pod-d09b548a-750e-4df8-86c7-952a00743f45 container test-container: STEP: delete the pod Apr 8 23:59:52.481: INFO: Waiting for pod pod-d09b548a-750e-4df8-86c7-952a00743f45 to disappear Apr 8 23:59:52.517: INFO: Pod pod-d09b548a-750e-4df8-86c7-952a00743f45 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:59:52.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6818" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:59:52.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod 
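The pod created in this step projects its own annotations into a file through the downward API, so patching the live pod's annotations changes the mounted file's contents. The volume stanza looks roughly like this (a sketch; the volume name and path are assumptions):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
```

The kubelet periodically refreshes downward API volume files; the "Successfully updated pod" entry below marks the annotation patch, after which the test waits for the file to reflect the new value.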
Apr 8 23:59:57.214: INFO: Successfully updated pod "annotationupdate770892d8-e4f5-41f5-ac1e-8361af6b6980" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 8 23:59:59.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6375" for this suite. • [SLOW TEST:6.894 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 8 23:59:59.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for 
the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8625" for this suite. STEP: Destroying namespace "nsdeletetest-9169" for this suite. Apr 9 00:00:05.731: INFO: Namespace nsdeletetest-9169 was already deleted STEP: Destroying namespace "nsdeletetest-1557" for this suite. • [SLOW TEST:6.315 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":100,"skipped":1755,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:05.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be 
served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 9 00:00:05.789: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:20.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3302" for this suite. • [SLOW TEST:14.957 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":101,"skipped":1774,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:20.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command 
and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:24.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6714" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1781,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:24.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0409 00:00:34.862304 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 9 00:00:34.862: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:34.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3944" for this suite. 
• [SLOW TEST:10.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":103,"skipped":1785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:34.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:00:34.932: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 9 00:00:36.964: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:38.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3757" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":104,"skipped":1822,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:38.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-28185902-be00-4b64-a6bc-aea599addd81 STEP: Creating a pod to test consume configMaps Apr 9 00:00:38.454: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45" in namespace "projected-8673" to be "Succeeded or Failed" Apr 9 00:00:38.457: INFO: Pod "pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290105ms Apr 9 00:00:40.468: INFO: Pod "pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014216818s Apr 9 00:00:42.472: INFO: Pod "pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018415319s STEP: Saw pod success Apr 9 00:00:42.472: INFO: Pod "pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45" satisfied condition "Succeeded or Failed" Apr 9 00:00:42.476: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45 container projected-configmap-volume-test: STEP: delete the pod Apr 9 00:00:42.493: INFO: Waiting for pod pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45 to disappear Apr 9 00:00:42.516: INFO: Pod pod-projected-configmaps-bcf503eb-f54c-4e70-999c-d326f2b5fe45 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:42.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8673" for this suite. 
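The projected-ConfigMap case above mounts individual keys under mapped paths with an explicit per-item file mode. A pod spec of roughly this shape exercises that behavior (hypothetical names and keys; the test's real fixture differs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2          # key in the ConfigMap
            path: path/to/data-2 # mapped path inside the mount
            mode: 0400           # "Item mode set": per-item file permissions
```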
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1822,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:42.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 9 00:00:42.609: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-874 /api/v1/namespaces/watch-874/configmaps/e2e-watch-test-watch-closed 500ec682-71a0-435d-a445-e914f0023c03 6540561 0 2020-04-09 00:00:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:00:42.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-874 /api/v1/namespaces/watch-874/configmaps/e2e-watch-test-watch-closed 500ec682-71a0-435d-a445-e914f0023c03 6540562 0 2020-04-09 00:00:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 9 00:00:42.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-874 /api/v1/namespaces/watch-874/configmaps/e2e-watch-test-watch-closed 500ec682-71a0-435d-a445-e914f0023c03 6540563 0 2020-04-09 00:00:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:00:42.625: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-874 /api/v1/namespaces/watch-874/configmaps/e2e-watch-test-watch-closed 500ec682-71a0-435d-a445-e914f0023c03 6540564 0 2020-04-09 00:00:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:42.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-874" for this suite. 
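The watch-restart semantics verified above hinge on resuming a watch from the last resourceVersion the client observed. At the API level this is just a query parameter; schematically, using the namespace and resourceVersions from the log entries above:

```
# First watch, closed after two events (ADDED at 6540561, MODIFIED at 6540562):
GET /api/v1/namespaces/watch-874/configmaps?watch=true

# Restarted watch, resuming from the last observed version; the server replays
# every change after it (MODIFIED 6540563, DELETED 6540564):
GET /api/v1/namespaces/watch-874/configmaps?watch=true&resourceVersion=6540562
```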
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":106,"skipped":1826,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:42.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:00:42.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede" in namespace "downward-api-3728" to be "Succeeded or Failed" Apr 9 00:00:42.728: INFO: Pod "downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240149ms Apr 9 00:00:44.733: INFO: Pod "downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008930425s Apr 9 00:00:46.737: INFO: Pod "downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013141595s STEP: Saw pod success Apr 9 00:00:46.737: INFO: Pod "downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede" satisfied condition "Succeeded or Failed" Apr 9 00:00:46.741: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede container client-container: STEP: delete the pod Apr 9 00:00:46.778: INFO: Waiting for pod downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede to disappear Apr 9 00:00:46.817: INFO: Pod downwardapi-volume-1c121cd5-ad9e-4968-97cf-3f2ea25faede no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:00:46.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3728" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1831,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:00:46.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 
STEP: Creating service test in namespace statefulset-2363 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 9 00:00:46.898: INFO: Found 0 stateful pods, waiting for 3 Apr 9 00:00:56.907: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:00:56.907: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:00:56.907: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 9 00:00:56.958: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 9 00:01:06.989: INFO: Updating stateful set ss2 Apr 9 00:01:07.027: INFO: Waiting for Pod statefulset-2363/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 9 00:01:17.149: INFO: Found 2 stateful pods, waiting for 3 Apr 9 00:01:27.154: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:01:27.154: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:01:27.154: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 9 00:01:27.178: INFO: Updating stateful set ss2 Apr 9 00:01:27.198: INFO: Waiting for Pod statefulset-2363/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 9 00:01:37.223: INFO: Updating stateful set ss2 Apr 9 00:01:37.251: INFO: Waiting for StatefulSet 
statefulset-2363/ss2 to complete update Apr 9 00:01:37.251: INFO: Waiting for Pod statefulset-2363/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 9 00:01:47.258: INFO: Deleting all statefulset in ns statefulset-2363 Apr 9 00:01:47.260: INFO: Scaling statefulset ss2 to 0 Apr 9 00:01:57.294: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 00:01:57.297: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:01:57.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2363" for this suite. • [SLOW TEST:70.494 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":108,"skipped":1831,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:01:57.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:01:57.442: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8b2852dc-7f5f-43db-a5fd-b807d54615d4", Controller:(*bool)(0xc00296f44a), BlockOwnerDeletion:(*bool)(0xc00296f44b)}} Apr 9 00:01:57.473: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a5263065-dd86-49bc-a9c9-35666bf36941", Controller:(*bool)(0xc0028d8372), BlockOwnerDeletion:(*bool)(0xc0028d8373)}} Apr 9 00:01:57.501: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"36942288-0674-4a10-8a6e-34c7ebcd3fd9", Controller:(*bool)(0xc002a236fa), BlockOwnerDeletion:(*bool)(0xc002a236fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:02:02.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9798" for this suite. 
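The circular ownership the test builds is visible in the log above: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, yet the garbage collector still deletes all three. One link of that cycle, in manifest form, looks like this (UIDs are assigned by the server and set via the API in the actual test, so a static manifest is illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 8b2852dc-7f5f-43db-a5fd-b807d54615d4  # from the log; assigned by the server in practice
    controller: true
    blockOwnerDeletion: true
```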
• [SLOW TEST:5.236 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":109,"skipped":1834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:02:02.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:02:37.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9545" for this suite. • [SLOW TEST:34.579 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 9 00:02:37.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2503/configmap-test-acea8ebd-94fc-4b9f-aee6-837c008bd309 STEP: Creating a pod to test consume configMaps Apr 9 00:02:37.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76" in namespace "configmap-2503" to be "Succeeded or Failed" Apr 9 00:02:37.245: INFO: Pod "pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329261ms Apr 9 00:02:39.249: INFO: Pod "pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008842667s Apr 9 00:02:41.254: INFO: Pod "pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013476209s STEP: Saw pod success Apr 9 00:02:41.254: INFO: Pod "pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76" satisfied condition "Succeeded or Failed" Apr 9 00:02:41.257: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76 container env-test: STEP: delete the pod Apr 9 00:02:41.288: INFO: Waiting for pod pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76 to disappear Apr 9 00:02:41.293: INFO: Pod pod-configmaps-2f614ebf-d77d-4df7-9040-aa6a45a1da76 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:02:41.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2503" for this suite. 
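Consuming a ConfigMap "via environment variable", as this test does, means wiring a key into a container's `env` with `configMapKeyRef`. A minimal sketch (hypothetical ConfigMap name and key):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints the injected variable to the pod log
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```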
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:02:41.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 9 00:02:41.377: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3269" to be "Succeeded or Failed" Apr 9 00:02:41.403: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.826094ms Apr 9 00:02:43.406: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029295908s Apr 9 00:02:45.409: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032544507s STEP: Saw pod success Apr 9 00:02:45.410: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 9 00:02:45.412: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 9 00:02:45.588: INFO: Waiting for pod pod-host-path-test to disappear Apr 9 00:02:45.598: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:02:45.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3269" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:02:45.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 9 00:02:45.642: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 00:02:45.660: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 00:02:45.662: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 9 00:02:45.667: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:02:45.667: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 00:02:45.667: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:02:45.667: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 00:02:45.667: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 9 00:02:45.672: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:02:45.672: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 00:02:45.672: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:02:45.672: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-624df65e-52f4-45e0-801e-ce567e217987 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-624df65e-52f4-45e0-801e-ce567e217987 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-624df65e-52f4-45e0-801e-ce567e217987 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:02:53.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9889" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.232 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":113,"skipped":1964,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:02:53.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should 
support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-dbll STEP: Creating a pod to test atomic-volume-subpath Apr 9 00:02:53.946: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dbll" in namespace "subpath-260" to be "Succeeded or Failed" Apr 9 00:02:53.949: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625437ms Apr 9 00:02:55.977: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031252408s Apr 9 00:02:57.993: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 4.047049743s Apr 9 00:02:59.996: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 6.050058483s Apr 9 00:03:02.000: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 8.054075134s Apr 9 00:03:04.004: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 10.058350979s Apr 9 00:03:06.008: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 12.062364995s Apr 9 00:03:08.012: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 14.066255141s Apr 9 00:03:10.016: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 16.070118305s Apr 9 00:03:12.020: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 18.073829864s Apr 9 00:03:14.024: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. Elapsed: 20.078169901s Apr 9 00:03:16.028: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.08233975s Apr 9 00:03:18.032: INFO: Pod "pod-subpath-test-projected-dbll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.086327461s STEP: Saw pod success Apr 9 00:03:18.032: INFO: Pod "pod-subpath-test-projected-dbll" satisfied condition "Succeeded or Failed" Apr 9 00:03:18.035: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-dbll container test-container-subpath-projected-dbll: STEP: delete the pod Apr 9 00:03:18.056: INFO: Waiting for pod pod-subpath-test-projected-dbll to disappear Apr 9 00:03:18.060: INFO: Pod pod-subpath-test-projected-dbll no longer exists STEP: Deleting pod pod-subpath-test-projected-dbll Apr 9 00:03:18.060: INFO: Deleting pod "pod-subpath-test-projected-dbll" in namespace "subpath-260" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:03:18.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-260" for this suite. 
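The atomic-writer subpath case above mounts a single entry out of a projected volume via `subPath` and verifies the container keeps seeing consistent content while the volume's contents are updated atomically underneath it. Roughly (hypothetical ConfigMap name, key, and paths):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected
    image: busybox
    command: ["cat", "/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/test-file
      subPath: data-1   # mounts just this path within the projected volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap
```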
• [SLOW TEST:24.231 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":114,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:03:18.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 9 00:03:18.185: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541596 0 2020-04-09 00:03:18 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:03:18.185: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541597 0 2020-04-09 00:03:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:03:18.185: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541598 0 2020-04-09 00:03:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 9 00:03:28.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541640 0 2020-04-09 00:03:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:03:28.223: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541641 0 2020-04-09 00:03:18 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:03:28.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5922 /api/v1/namespaces/watch-5922/configmaps/e2e-watch-test-label-changed 0c53af53-f02a-4c4f-a631-e89995cacb03 6541642 0 2020-04-09 00:03:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:03:28.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5922" for this suite. • [SLOW TEST:10.180 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":115,"skipped":2001,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:03:28.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:03:28.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:03:30.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987408, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987408, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987408, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:03:33.957: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:03:34.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4142" for this suite.
STEP: Destroying namespace "webhook-4142-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.842 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":116,"skipped":2012,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:03:34.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:04:34.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3880" for this suite.
• [SLOW TEST:60.104 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":2043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:04:34.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 9 00:04:34.286: INFO: Waiting up to 5m0s for pod "pod-0a21803e-7058-400c-b841-8f6996d8e513" in namespace "emptydir-7563" to be "Succeeded or Failed"
Apr 9 00:04:34.329: INFO: Pod "pod-0a21803e-7058-400c-b841-8f6996d8e513": Phase="Pending", Reason="", readiness=false. Elapsed: 43.457366ms
Apr 9 00:04:36.334: INFO: Pod "pod-0a21803e-7058-400c-b841-8f6996d8e513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047738558s
Apr 9 00:04:38.338: INFO: Pod "pod-0a21803e-7058-400c-b841-8f6996d8e513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052311462s
STEP: Saw pod success
Apr 9 00:04:38.338: INFO: Pod "pod-0a21803e-7058-400c-b841-8f6996d8e513" satisfied condition "Succeeded or Failed"
Apr 9 00:04:38.341: INFO: Trying to get logs from node latest-worker pod pod-0a21803e-7058-400c-b841-8f6996d8e513 container test-container:
STEP: delete the pod
Apr 9 00:04:38.391: INFO: Waiting for pod pod-0a21803e-7058-400c-b841-8f6996d8e513 to disappear
Apr 9 00:04:38.400: INFO: Pod pod-0a21803e-7058-400c-b841-8f6996d8e513 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:04:38.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7563" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2069,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:04:38.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Apr 9 00:04:38.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9178'
Apr 9 00:04:38.761: INFO: stderr: ""
Apr 9 00:04:38.761: INFO: stdout: "pod/pause created\n"
Apr 9 00:04:38.761: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 9 00:04:38.761: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9178" to be "running and ready"
Apr 9 00:04:38.788: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.496229ms
Apr 9 00:04:40.792: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030712239s
Apr 9 00:04:42.796: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.035268076s
Apr 9 00:04:42.796: INFO: Pod "pause" satisfied condition "running and ready"
Apr 9 00:04:42.796: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 9 00:04:42.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9178'
Apr 9 00:04:42.895: INFO: stderr: ""
Apr 9 00:04:42.895: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 9 00:04:42.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9178'
Apr 9 00:04:42.999: INFO: stderr: ""
Apr 9 00:04:42.999: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 9 00:04:42.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9178'
Apr 9 00:04:43.093: INFO: stderr: ""
Apr 9 00:04:43.094: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 9 00:04:43.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9178'
Apr 9 00:04:43.182: INFO: stderr: ""
Apr 9 00:04:43.182: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Apr 9 00:04:43.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9178'
Apr 9 00:04:43.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 9 00:04:43.278: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 9 00:04:43.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9178'
Apr 9 00:04:43.367: INFO: stderr: "No resources found in kubectl-9178 namespace.\n"
Apr 9 00:04:43.367: INFO: stdout: ""
Apr 9 00:04:43.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9178 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 9 00:04:43.459: INFO: stderr: ""
Apr 9 00:04:43.459: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:04:43.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9178" for this suite.
• [SLOW TEST:5.077 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":119,"skipped":2077,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:04:43.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:04:43.696: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 9 00:04:43.722: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 9 00:04:48.726: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 9 00:04:48.726: INFO: Creating deployment "test-rolling-update-deployment"
Apr 9 00:04:48.729: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the 
adopted replica set "test-rolling-update-controller" has Apr 9 00:04:48.735: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 9 00:04:50.742: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 9 00:04:50.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987488, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987488, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987488, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987488, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 00:04:52.749: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 9 00:04:52.759: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9813 /apis/apps/v1/namespaces/deployment-9813/deployments/test-rolling-update-deployment 2ba5b0e3-369f-4931-93f9-9bc1dc438071 6542084 1 2020-04-09 00:04:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d2a5a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-09 00:04:48 +0000 UTC,LastTransitionTime:2020-04-09 00:04:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-09 00:04:51 +0000 UTC,LastTransitionTime:2020-04-09 00:04:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Apr 9 00:04:52.762: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-9813 
/apis/apps/v1/namespaces/deployment-9813/replicasets/test-rolling-update-deployment-664dd8fc7f c85bd6e4-0385-4984-bfc8-34d8cd3f6d76 6542073 1 2020-04-09 00:04:48 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2ba5b0e3-369f-4931-93f9-9bc1dc438071 0xc002d6ecb7 0xc002d6ecb8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d6ed28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 9 00:04:52.762: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 9 00:04:52.762: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9813 /apis/apps/v1/namespaces/deployment-9813/replicasets/test-rolling-update-controller 133a3f65-8f91-4900-81f8-daf7a022228b 6542082 2 2020-04-09 00:04:43 +0000 UTC 
map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2ba5b0e3-369f-4931-93f9-9bc1dc438071 0xc002d6ebe7 0xc002d6ebe8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d6ec48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 00:04:52.765: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-zr4cn" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-zr4cn test-rolling-update-deployment-664dd8fc7f- deployment-9813 /api/v1/namespaces/deployment-9813/pods/test-rolling-update-deployment-664dd8fc7f-zr4cn edcce35a-4f28-463e-90ea-437a70cc083a 6542072 0 2020-04-09 00:04:48 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f c85bd6e4-0385-4984-bfc8-34d8cd3f6d76 0xc002df5777 0xc002df5778}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-62wp8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-62wp8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-62wp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:04:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:04:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:04:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.130,StartTime:2020-04-09 00:04:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-09 00:04:50 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://12d401103de31cee5b4c3b49b79721e2216426fff1482d22a61306663fb2687d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:04:52.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9813" for this suite. • [SLOW TEST:9.287 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":120,"skipped":2094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:04:52.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: 
Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-c2133c0e-7ff4-4748-b2dc-9aabc1c33fb9
STEP: Creating a pod to test consume secrets
Apr 9 00:04:52.867: INFO: Waiting up to 5m0s for pod "pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92" in namespace "secrets-8015" to be "Succeeded or Failed"
Apr 9 00:04:52.890: INFO: Pod "pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92": Phase="Pending", Reason="", readiness=false. Elapsed: 23.233737ms
Apr 9 00:04:54.894: INFO: Pod "pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026852206s
Apr 9 00:04:56.898: INFO: Pod "pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030894282s
STEP: Saw pod success
Apr 9 00:04:56.898: INFO: Pod "pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92" satisfied condition "Succeeded or Failed"
Apr 9 00:04:56.914: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92 container secret-volume-test:
STEP: delete the pod
Apr 9 00:04:56.999: INFO: Waiting for pod pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92 to disappear
Apr 9 00:04:57.018: INFO: Pod pod-secrets-f826ffc4-c8f8-49cb-a52e-57952523fd92 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:04:57.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8015" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:04:57.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:08.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2609" for this suite. • [SLOW TEST:11.142 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":122,"skipped":2218,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:08.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 9 00:05:16.302: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 00:05:16.309: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 00:05:18.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 00:05:18.313: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 00:05:20.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 00:05:20.314: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 00:05:22.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 00:05:22.314: INFO: Pod pod-with-poststart-exec-hook still exists Apr 9 00:05:24.309: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 9 00:05:24.313: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:24.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7944" for this suite. 
• [SLOW TEST:16.151 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2221,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:24.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:05:24.896: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:05:26.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987524, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987524, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987524, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987524, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:05:29.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:30.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7355" for this suite. 
STEP: Destroying namespace "webhook-7355-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.968 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":124,"skipped":2226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:30.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:05:30.864: INFO: deployment "sample-webhook-deployment" doesn't have the required revision 
set Apr 9 00:05:32.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 00:05:34.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987530, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying 
the service has paired with the endpoint Apr 9 00:05:37.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:38.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2974" for this suite. STEP: Destroying namespace "webhook-2974-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.892 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":125,"skipped":2252,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:38.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 9 00:05:38.752: INFO: deployment 
"sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 9 00:05:40.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987538, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987538, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987538, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987538, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:05:43.821: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:05:43.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:44.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7974" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.882 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":126,"skipped":2274,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:45.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:05:45.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c" in namespace "downward-api-8855" to be "Succeeded or Failed" Apr 9 00:05:45.176: INFO: Pod 
"downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.611752ms Apr 9 00:05:47.180: INFO: Pod "downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021986253s Apr 9 00:05:49.185: INFO: Pod "downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026546393s STEP: Saw pod success Apr 9 00:05:49.185: INFO: Pod "downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c" satisfied condition "Succeeded or Failed" Apr 9 00:05:49.188: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c container client-container: STEP: delete the pod Apr 9 00:05:49.221: INFO: Waiting for pod downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c to disappear Apr 9 00:05:49.245: INFO: Pod downwardapi-volume-752d4ae1-2e5a-458f-8276-a0c080d7877c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:49.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8855" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:49.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 9 00:05:49.333: INFO: Waiting up to 5m0s for pod "downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4" in namespace "downward-api-6810" to be "Succeeded or Failed" Apr 9 00:05:49.336: INFO: Pod "downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.574239ms Apr 9 00:05:51.341: INFO: Pod "downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0084408s Apr 9 00:05:53.345: INFO: Pod "downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01199447s STEP: Saw pod success Apr 9 00:05:53.345: INFO: Pod "downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4" satisfied condition "Succeeded or Failed" Apr 9 00:05:53.347: INFO: Trying to get logs from node latest-worker2 pod downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4 container dapi-container: STEP: delete the pod Apr 9 00:05:53.409: INFO: Waiting for pod downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4 to disappear Apr 9 00:05:53.414: INFO: Pod downward-api-6638f009-ac39-4c8c-98c4-71f8f9f019a4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:05:53.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6810" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2318,"failed":0} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:05:53.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 9 00:05:58.023: INFO: Successfully updated pod "adopt-release-ggwpf" STEP: Checking that the Job readopts the Pod Apr 9 
00:05:58.023: INFO: Waiting up to 15m0s for pod "adopt-release-ggwpf" in namespace "job-5119" to be "adopted" Apr 9 00:05:58.026: INFO: Pod "adopt-release-ggwpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.98058ms Apr 9 00:06:00.031: INFO: Pod "adopt-release-ggwpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.007428052s Apr 9 00:06:00.031: INFO: Pod "adopt-release-ggwpf" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 9 00:06:00.540: INFO: Successfully updated pod "adopt-release-ggwpf" STEP: Checking that the Job releases the Pod Apr 9 00:06:00.540: INFO: Waiting up to 15m0s for pod "adopt-release-ggwpf" in namespace "job-5119" to be "released" Apr 9 00:06:00.544: INFO: Pod "adopt-release-ggwpf": Phase="Running", Reason="", readiness=true. Elapsed: 4.333173ms Apr 9 00:06:02.548: INFO: Pod "adopt-release-ggwpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.007719058s Apr 9 00:06:02.548: INFO: Pod "adopt-release-ggwpf" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:06:02.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5119" for this suite. 
• [SLOW TEST:9.136 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":129,"skipped":2321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:06:02.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:06:02.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5826" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":130,"skipped":2391,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:06:02.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:06:03.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6155' Apr 9 00:06:03.281: INFO: stderr: "" Apr 9 00:06:03.281: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 9 00:06:03.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6155' Apr 9 00:06:03.525: INFO: stderr: "" Apr 9 00:06:03.525: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 9 00:06:04.540: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 00:06:04.540: INFO: Found 0 / 1 Apr 9 00:06:05.530: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 00:06:05.530: INFO: Found 0 / 1 Apr 9 00:06:06.532: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 00:06:06.532: INFO: Found 1 / 1 Apr 9 00:06:06.532: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 9 00:06:06.534: INFO: Selector matched 1 pods for map[app:agnhost] Apr 9 00:06:06.534: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 9 00:06:06.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-q2l47 --namespace=kubectl-6155' Apr 9 00:06:06.655: INFO: stderr: "" Apr 9 00:06:06.655: INFO: stdout: "Name: agnhost-master-q2l47\nNamespace: kubectl-6155\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Thu, 09 Apr 2020 00:06:03 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.176\nIPs:\n IP: 10.244.1.176\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://c3cddbd724b33d586f3143c984c53a961ab804b27de24b2ccf1f6ce4419f5152\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 09 Apr 2020 00:06:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zz5hf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zz5hf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zz5hf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-6155/agnhost-master-q2l47 to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Apr 9 00:06:06.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6155' Apr 9 00:06:06.771: INFO: stderr: "" Apr 9 00:06:06.771: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6155\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-q2l47\n" Apr 9 00:06:06.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6155' Apr 9 00:06:06.867: INFO: stderr: "" Apr 9 00:06:06.867: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6155\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.52.208\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.176:6379\nSession Affinity: None\nEvents: \n" Apr 9 
00:06:06.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 9 00:06:06.998: INFO: stderr: "" Apr 9 00:06:06.998: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Thu, 09 Apr 2020 00:06:01 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 09 Apr 2020 00:03:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 09 Apr 2020 00:03:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 09 Apr 2020 00:03:22 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 09 Apr 2020 00:03:22 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 
611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 24d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Apr 9 00:06:06.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-6155' Apr 9 00:06:07.098: INFO: stderr: "" Apr 9 00:06:07.098: INFO: stdout: "Name: kubectl-6155\nLabels: e2e-framework=kubectl\n e2e-run=ce5685a5-65b4-47d8-af76-e40748af99cd\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:06:07.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6155" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":131,"skipped":2405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:06:07.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 9 00:06:07.179: INFO: PodSpec: initContainers in spec.initContainers Apr 9 00:06:53.758: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b2e17a86-86b3-4443-b21b-f18d808cb727", GenerateName:"", Namespace:"init-container-5861", SelfLink:"/api/v1/namespaces/init-container-5861/pods/pod-init-b2e17a86-86b3-4443-b21b-f18d808cb727", UID:"f92342ec-0d64-43ee-8a9a-55abbd1abce3", 
ResourceVersion:"6542950", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721987567, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"179830349"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lxzg7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005b44000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lxzg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lxzg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lxzg7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002da2068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003c2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002da20f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002da2110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002da2118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002da211c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987567, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987567, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987567, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987567, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.177", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.177"}}, StartTime:(*v1.Time)(0xc004478040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003c20e0)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003c21c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://879f1a58bacaf0d7863ce058d64945e9c6b75aab6781112db246508814fd7fb6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004478080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004478060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002da219f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:06:53.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5861" for this suite. 
• [SLOW TEST:46.686 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":132,"skipped":2442,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:06:53.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 9 00:06:53.923: INFO: Waiting up to 5m0s for pod "client-containers-3a66bffc-11aa-4478-9248-99ed81edc397" in namespace "containers-5481" to be "Succeeded or Failed" Apr 9 00:06:53.940: INFO: Pod "client-containers-3a66bffc-11aa-4478-9248-99ed81edc397": Phase="Pending", Reason="", readiness=false. Elapsed: 17.005648ms Apr 9 00:06:55.944: INFO: Pod "client-containers-3a66bffc-11aa-4478-9248-99ed81edc397": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020584024s Apr 9 00:06:57.949: INFO: Pod "client-containers-3a66bffc-11aa-4478-9248-99ed81edc397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025247062s STEP: Saw pod success Apr 9 00:06:57.949: INFO: Pod "client-containers-3a66bffc-11aa-4478-9248-99ed81edc397" satisfied condition "Succeeded or Failed" Apr 9 00:06:57.952: INFO: Trying to get logs from node latest-worker pod client-containers-3a66bffc-11aa-4478-9248-99ed81edc397 container test-container: STEP: delete the pod Apr 9 00:06:57.988: INFO: Waiting for pod client-containers-3a66bffc-11aa-4478-9248-99ed81edc397 to disappear Apr 9 00:06:57.997: INFO: Pod client-containers-3a66bffc-11aa-4478-9248-99ed81edc397 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:06:57.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5481" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2442,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:06:58.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:06:58.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:07:00.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987618, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987618, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987618, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:07:03.593: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 9 00:07:03.615: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:07:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8518" for this suite. STEP: Destroying namespace "webhook-8518-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.770 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":134,"skipped":2448,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:07:03.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
downward API volume plugin Apr 9 00:07:03.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f" in namespace "projected-9282" to be "Succeeded or Failed" Apr 9 00:07:03.851: INFO: Pod "downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955582ms Apr 9 00:07:05.856: INFO: Pod "downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00758054s Apr 9 00:07:07.860: INFO: Pod "downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01204786s STEP: Saw pod success Apr 9 00:07:07.860: INFO: Pod "downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f" satisfied condition "Succeeded or Failed" Apr 9 00:07:07.863: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f container client-container: STEP: delete the pod Apr 9 00:07:07.890: INFO: Waiting for pod downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f to disappear Apr 9 00:07:07.911: INFO: Pod downwardapi-volume-848e05fe-f7dc-4eec-a92c-51cb8560b10f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:07:07.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9282" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:07:07.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 9 00:07:08.510: INFO: Pod name wrapped-volume-race-95e48a77-84f9-48cb-99f5-9d768a95b7b4: Found 0 pods out of 5 Apr 9 00:07:13.518: INFO: Pod name wrapped-volume-race-95e48a77-84f9-48cb-99f5-9d768a95b7b4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-95e48a77-84f9-48cb-99f5-9d768a95b7b4 in namespace emptydir-wrapper-5873, will wait for the garbage collector to delete the pods Apr 9 00:07:27.684: INFO: Deleting ReplicationController wrapped-volume-race-95e48a77-84f9-48cb-99f5-9d768a95b7b4 took: 8.014954ms Apr 9 00:07:27.984: INFO: Terminating ReplicationController wrapped-volume-race-95e48a77-84f9-48cb-99f5-9d768a95b7b4 pods took: 300.250983ms STEP: Creating RC which spawns configmap-volume pods Apr 9 00:07:43.808: INFO: Pod name wrapped-volume-race-18557a09-bdae-48f0-afc0-01cc0371aa09: Found 0 pods out of 5 
Apr 9 00:07:48.815: INFO: Pod name wrapped-volume-race-18557a09-bdae-48f0-afc0-01cc0371aa09: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-18557a09-bdae-48f0-afc0-01cc0371aa09 in namespace emptydir-wrapper-5873, will wait for the garbage collector to delete the pods Apr 9 00:08:02.898: INFO: Deleting ReplicationController wrapped-volume-race-18557a09-bdae-48f0-afc0-01cc0371aa09 took: 7.523732ms Apr 9 00:08:03.298: INFO: Terminating ReplicationController wrapped-volume-race-18557a09-bdae-48f0-afc0-01cc0371aa09 pods took: 400.263199ms STEP: Creating RC which spawns configmap-volume pods Apr 9 00:08:13.051: INFO: Pod name wrapped-volume-race-90fe769d-08c3-4edb-ba7d-5e8d1d815de5: Found 0 pods out of 5 Apr 9 00:08:18.066: INFO: Pod name wrapped-volume-race-90fe769d-08c3-4edb-ba7d-5e8d1d815de5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-90fe769d-08c3-4edb-ba7d-5e8d1d815de5 in namespace emptydir-wrapper-5873, will wait for the garbage collector to delete the pods Apr 9 00:08:32.156: INFO: Deleting ReplicationController wrapped-volume-race-90fe769d-08c3-4edb-ba7d-5e8d1d815de5 took: 9.596711ms Apr 9 00:08:32.456: INFO: Terminating ReplicationController wrapped-volume-race-90fe769d-08c3-4edb-ba7d-5e8d1d815de5 pods took: 300.245022ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:08:44.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5873" for this suite. 
• [SLOW TEST:96.523 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":136,"skipped":2495,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:08:44.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:08:44.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 9 00:08:45.078: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:45Z generation:1 name:name1 resourceVersion:6544061 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cce101c0-fedd-4bfc-87aa-1a320b72baa6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 9 00:08:55.083: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:55Z generation:1 name:name2 resourceVersion:6544276 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:38b7d994-c379-4ed8-9208-66627034f5dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 9 00:09:05.090: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:45Z generation:2 name:name1 resourceVersion:6544306 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cce101c0-fedd-4bfc-87aa-1a320b72baa6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 9 00:09:15.096: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:55Z generation:2 name:name2 resourceVersion:6544336 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:38b7d994-c379-4ed8-9208-66627034f5dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 9 00:09:25.104: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:45Z generation:2 name:name1 resourceVersion:6544366 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cce101c0-fedd-4bfc-87aa-1a320b72baa6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 9 00:09:35.119: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-09T00:08:55Z generation:2 name:name2 resourceVersion:6544396 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:38b7d994-c379-4ed8-9208-66627034f5dd] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:09:45.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1139" for this suite.
• [SLOW TEST:61.196 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":137,"skipped":2512,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:09:45.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 00:09:46.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 00:09:48.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987786, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987786, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987786, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721987786, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 00:09:51.465: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:09:51.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7427" for this suite.
STEP: Destroying namespace "webhook-7427-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.281 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":138,"skipped":2514,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:09:51.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6253
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6253
STEP: Creating statefulset with conflicting port in namespace statefulset-6253
STEP: Waiting until pod test-pod will start running in namespace statefulset-6253
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6253
Apr 9 00:09:56.092: INFO: Observed stateful pod in namespace: statefulset-6253, name: ss-0, uid: b904f42f-7539-4214-8bc8-b6080cd78ef3, status phase: Pending. Waiting for statefulset controller to delete.
Apr 9 00:09:56.475: INFO: Observed stateful pod in namespace: statefulset-6253, name: ss-0, uid: b904f42f-7539-4214-8bc8-b6080cd78ef3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 9 00:09:56.485: INFO: Observed stateful pod in namespace: statefulset-6253, name: ss-0, uid: b904f42f-7539-4214-8bc8-b6080cd78ef3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 9 00:09:56.506: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6253
STEP: Removing pod with conflicting port in namespace statefulset-6253
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6253 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 9 00:10:00.599: INFO: Deleting all statefulset in ns statefulset-6253
Apr 9 00:10:00.603: INFO: Scaling statefulset ss to 0
Apr 9 00:10:20.642: INFO: Waiting for statefulset status.replicas updated to 0
Apr 9 00:10:20.645: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:10:20.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6253" for this suite.
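The repeated "Observed stateful pod ... Waiting for statefulset controller to delete" and "Waiting up to ..." lines above come from a poll-until-condition loop. The sketch below is a simplified stand-in for that pattern, not the real k8s.io/apimachinery wait helpers; `waitForCondition` and the fake condition are hypothetical names for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCondition polls cond every interval until it reports done or the
// timeout elapses, mirroring the "Waiting until ..." log lines above.
func waitForCondition(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := cond()
		if err != nil {
			return err // a condition error aborts the wait immediately
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	// Fake "pod state" that becomes terminal on the third poll.
	err := waitForCondition(10*time.Millisecond, time.Second, func() (bool, error) {
		attempts++
		return attempts >= 3, nil
	})
	fmt.Println(err == nil, attempts) // Prints: true 3
}
```

The real framework additionally logs the elapsed time on every poll, which is where the `Elapsed: ...` figures in this log come from.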
• [SLOW TEST:28.762 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":139,"skipped":2525,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:10:20.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 9 00:10:20.750: INFO: Waiting up to 5m0s for pod "var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19" in namespace "var-expansion-6567" to be "Succeeded or Failed"
Apr 9 00:10:20.758: INFO: Pod "var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662592ms
Apr 9 00:10:22.763: INFO: Pod "var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012856488s
Apr 9 00:10:24.767: INFO: Pod "var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016968932s
STEP: Saw pod success
Apr 9 00:10:24.767: INFO: Pod "var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19" satisfied condition "Succeeded or Failed"
Apr 9 00:10:24.770: INFO: Trying to get logs from node latest-worker pod var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19 container dapi-container:
STEP: delete the pod
Apr 9 00:10:24.843: INFO: Waiting for pod var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19 to disappear
Apr 9 00:10:24.848: INFO: Pod var-expansion-8e0ec10f-ca92-42e1-94da-6a6ae9beab19 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:10:24.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6567" for this suite.
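The Variable Expansion test above verifies that `$(VAR_NAME)` references in a container's command are substituted from the pod's environment. The sketch below is a simplified, self-contained approximation of that substitution (unknown references are left intact, `$$` escapes a literal `$`); it is not the actual Kubernetes expansion code, and `expand` is a hypothetical name for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// expand substitutes $(NAME) references from vars, leaves unknown
// references as-is, and treats "$$" as an escaped "$".
func expand(s string, vars map[string]string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		// "$$" escapes a single literal "$".
		if s[i] == '$' && i+1 < len(s) && s[i+1] == '$' {
			b.WriteByte('$')
			i++
			continue
		}
		// "$(" starts a reference; substitute if the name is known.
		if s[i] == '$' && i+1 < len(s) && s[i+1] == '(' {
			if end := strings.IndexByte(s[i:], ')'); end > 0 {
				name := s[i+2 : i+end]
				if v, ok := vars[name]; ok {
					b.WriteString(v)
					i += end
					continue
				}
			}
		}
		b.WriteByte(s[i])
	}
	return b.String()
}

func main() {
	vars := map[string]string{"MESSAGE": "test-value"}
	fmt.Println(expand("echo $(MESSAGE)", vars)) // Prints: echo test-value
}
```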
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:10:24.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0409 00:10:26.089703 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 9 00:10:26.089: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:10:26.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1681" for this suite.
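The garbage collector test above deletes a Deployment and waits for its owned ReplicaSet and Pods to cascade away via owner references. The sketch below is a deliberately simplified, in-memory illustration of that cascading rule (an object is collected once all of its owners are gone); the real controller is event-driven over the API server's object graph, and `object`/`collect` are hypothetical names for illustration.

```go
package main

import "fmt"

// object is a minimal stand-in for an API object with owner references.
type object struct {
	name   string
	owners []string // names of owning objects
}

// collect repeatedly removes objects that reference a missing owner,
// so a deleted Deployment takes its ReplicaSet, then the Pods, with it.
func collect(live map[string]object) {
	for changed := true; changed; {
		changed = false
		for name, obj := range live {
			for _, owner := range obj.owners {
				if _, ok := live[owner]; !ok {
					delete(live, name) // owner gone: collect this object
					changed = true
					break
				}
			}
		}
	}
}

func main() {
	live := map[string]object{
		// The owning "deployment" has already been deleted.
		"rs":  {name: "rs", owners: []string{"deployment"}},
		"pod": {name: "pod", owners: []string{"rs"}},
	}
	collect(live)
	fmt.Println(len(live)) // Prints: 0
}
```

The "expected 0 rs, got 1 rs" lines in the log are simply intermediate polls taken before the cascade finished.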
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":141,"skipped":2595,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:10:26.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 9 00:10:26.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55" in namespace "projected-9681" to be "Succeeded or Failed"
Apr 9 00:10:26.305: INFO: Pod "downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55": Phase="Pending", Reason="", readiness=false. Elapsed: 37.085515ms
Apr 9 00:10:28.308: INFO: Pod "downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04066376s
Apr 9 00:10:30.312: INFO: Pod "downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044705544s
STEP: Saw pod success
Apr 9 00:10:30.313: INFO: Pod "downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55" satisfied condition "Succeeded or Failed"
Apr 9 00:10:30.315: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55 container client-container:
STEP: delete the pod
Apr 9 00:10:30.376: INFO: Waiting for pod downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55 to disappear
Apr 9 00:10:30.387: INFO: Pod downwardapi-volume-fb43e781-3f64-45d8-84d3-253126e77c55 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:10:30.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9681" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2602,"failed":0}
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:10:30.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-04134c4e-1f16-46de-9995-70cf711c3947 in namespace container-probe-693
Apr 9 00:10:34.475: INFO: Started pod liveness-04134c4e-1f16-46de-9995-70cf711c3947 in namespace container-probe-693
STEP: checking the pod's current state and verifying that restartCount is present
Apr 9 00:10:34.478: INFO: Initial restart count of pod liveness-04134c4e-1f16-46de-9995-70cf711c3947 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:14:35.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-693" for this suite.
• [SLOW TEST:244.688 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2604,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:14:35.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-6dfq
STEP: Creating a pod to test atomic-volume-subpath
Apr 9 00:14:35.369: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6dfq" in namespace "subpath-5089" to be "Succeeded or Failed"
Apr 9 00:14:35.451: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Pending", Reason="", readiness=false. Elapsed: 81.128767ms
Apr 9 00:14:37.454: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084537529s
Apr 9 00:14:39.458: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 4.088554205s
Apr 9 00:14:41.462: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 6.092742102s
Apr 9 00:14:43.466: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 8.096281552s
Apr 9 00:14:45.470: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 10.100217341s
Apr 9 00:14:47.474: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 12.104903941s
Apr 9 00:14:49.478: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 14.108586877s
Apr 9 00:14:51.482: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 16.112780352s
Apr 9 00:14:53.499: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 18.129205603s
Apr 9 00:14:55.503: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 20.133824801s
Apr 9 00:14:57.507: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Running", Reason="", readiness=true. Elapsed: 22.137345059s
Apr 9 00:14:59.511: INFO: Pod "pod-subpath-test-downwardapi-6dfq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.141131201s
STEP: Saw pod success
Apr 9 00:14:59.511: INFO: Pod "pod-subpath-test-downwardapi-6dfq" satisfied condition "Succeeded or Failed"
Apr 9 00:14:59.514: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-6dfq container test-container-subpath-downwardapi-6dfq:
STEP: delete the pod
Apr 9 00:14:59.565: INFO: Waiting for pod pod-subpath-test-downwardapi-6dfq to disappear
Apr 9 00:14:59.567: INFO: Pod pod-subpath-test-downwardapi-6dfq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6dfq
Apr 9 00:14:59.567: INFO: Deleting pod "pod-subpath-test-downwardapi-6dfq" in namespace "subpath-5089"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:14:59.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5089" for this suite.
• [SLOW TEST:24.496 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":144,"skipped":2622,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:14:59.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 9 00:15:07.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 9 00:15:07.727: INFO: Pod pod-with-prestop-http-hook still exists
Apr 9 00:15:09.727: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 9 00:15:09.731: INFO: Pod pod-with-prestop-http-hook still exists
Apr 9 00:15:11.727: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 9 00:15:11.731: INFO: Pod pod-with-prestop-http-hook still exists
Apr 9 00:15:13.727: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 9 00:15:13.730: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:15:13.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3814" for this suite.
• [SLOW TEST:14.196 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:15:13.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:15:13.824: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fe47137b-0516-4d95-b4c6-74c2c8c7ee37" in namespace "security-context-test-7353" to be "Succeeded or Failed"
Apr 9 00:15:13.840: INFO: Pod "busybox-user-65534-fe47137b-0516-4d95-b4c6-74c2c8c7ee37": Phase="Pending", Reason="", readiness=false. Elapsed: 16.022537ms
Apr 9 00:15:15.853: INFO: Pod "busybox-user-65534-fe47137b-0516-4d95-b4c6-74c2c8c7ee37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028998315s
Apr 9 00:15:17.857: INFO: Pod "busybox-user-65534-fe47137b-0516-4d95-b4c6-74c2c8c7ee37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032897851s
Apr 9 00:15:17.857: INFO: Pod "busybox-user-65534-fe47137b-0516-4d95-b4c6-74c2c8c7ee37" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:15:17.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7353" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:15:17.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 9 00:15:17.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7707'
Apr 9 00:15:21.427: INFO: stderr: ""
Apr 9 00:15:21.427: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 9 00:15:21.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Apr 9 00:15:21.565: INFO: stderr: ""
Apr 9 00:15:21.565: INFO: stdout: "update-demo-nautilus-gbwg9 update-demo-nautilus-pmkp4 "
Apr 9 00:15:21.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbwg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Apr 9 00:15:21.652: INFO: stderr: ""
Apr 9 00:15:21.652: INFO: stdout: ""
Apr 9 00:15:21.652: INFO: update-demo-nautilus-gbwg9 is created but not running
Apr 9 00:15:26.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707'
Apr 9 00:15:26.757: INFO: stderr: ""
Apr 9 00:15:26.758: INFO: stdout: "update-demo-nautilus-gbwg9 update-demo-nautilus-pmkp4 "
Apr 9 00:15:26.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbwg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Apr 9 00:15:26.857: INFO: stderr: ""
Apr 9 00:15:26.857: INFO: stdout: "true"
Apr 9 00:15:26.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbwg9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Apr 9 00:15:26.945: INFO: stderr: ""
Apr 9 00:15:26.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 00:15:26.945: INFO: validating pod update-demo-nautilus-gbwg9
Apr 9 00:15:26.949: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 00:15:26.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 00:15:26.949: INFO: update-demo-nautilus-gbwg9 is verified up and running
Apr 9 00:15:26.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Apr 9 00:15:27.042: INFO: stderr: ""
Apr 9 00:15:27.042: INFO: stdout: "true"
Apr 9 00:15:27.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707'
Apr 9 00:15:27.134: INFO: stderr: ""
Apr 9 00:15:27.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 9 00:15:27.134: INFO: validating pod update-demo-nautilus-pmkp4
Apr 9 00:15:27.139: INFO: got data: { "image": "nautilus.jpg" }
Apr 9 00:15:27.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 9 00:15:27.139: INFO: update-demo-nautilus-pmkp4 is verified up and running
STEP: scaling down the replication controller
Apr 9 00:15:27.141: INFO: scanned /root for discovery docs:
Apr 9 00:15:27.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7707'
Apr 9 00:15:28.265: INFO: stderr: ""
Apr 9 00:15:28.265: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 9 00:15:28.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707' Apr 9 00:15:28.362: INFO: stderr: "" Apr 9 00:15:28.362: INFO: stdout: "update-demo-nautilus-gbwg9 update-demo-nautilus-pmkp4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 9 00:15:33.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707' Apr 9 00:15:33.457: INFO: stderr: "" Apr 9 00:15:33.457: INFO: stdout: "update-demo-nautilus-pmkp4 " Apr 9 00:15:33.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:33.549: INFO: stderr: "" Apr 9 00:15:33.549: INFO: stdout: "true" Apr 9 00:15:33.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:33.638: INFO: stderr: "" Apr 9 00:15:33.638: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 00:15:33.638: INFO: validating pod update-demo-nautilus-pmkp4 Apr 9 00:15:33.642: INFO: got data: { "image": "nautilus.jpg" } Apr 9 00:15:33.642: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 9 00:15:33.642: INFO: update-demo-nautilus-pmkp4 is verified up and running STEP: scaling up the replication controller Apr 9 00:15:33.644: INFO: scanned /root for discovery docs: Apr 9 00:15:33.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7707' Apr 9 00:15:34.752: INFO: stderr: "" Apr 9 00:15:34.752: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 9 00:15:34.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707' Apr 9 00:15:34.852: INFO: stderr: "" Apr 9 00:15:34.852: INFO: stdout: "update-demo-nautilus-gp2rq update-demo-nautilus-pmkp4 " Apr 9 00:15:34.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp2rq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:34.933: INFO: stderr: "" Apr 9 00:15:34.933: INFO: stdout: "" Apr 9 00:15:34.933: INFO: update-demo-nautilus-gp2rq is created but not running Apr 9 00:15:39.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7707' Apr 9 00:15:40.041: INFO: stderr: "" Apr 9 00:15:40.041: INFO: stdout: "update-demo-nautilus-gp2rq update-demo-nautilus-pmkp4 " Apr 9 00:15:40.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp2rq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:40.136: INFO: stderr: "" Apr 9 00:15:40.136: INFO: stdout: "true" Apr 9 00:15:40.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp2rq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:40.212: INFO: stderr: "" Apr 9 00:15:40.212: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 00:15:40.212: INFO: validating pod update-demo-nautilus-gp2rq Apr 9 00:15:40.215: INFO: got data: { "image": "nautilus.jpg" } Apr 9 00:15:40.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 9 00:15:40.215: INFO: update-demo-nautilus-gp2rq is verified up and running Apr 9 00:15:40.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:40.298: INFO: stderr: "" Apr 9 00:15:40.298: INFO: stdout: "true" Apr 9 00:15:40.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmkp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7707' Apr 9 00:15:40.393: INFO: stderr: "" Apr 9 00:15:40.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 00:15:40.393: INFO: validating pod update-demo-nautilus-pmkp4 Apr 9 00:15:40.396: INFO: got data: { "image": "nautilus.jpg" } Apr 9 00:15:40.396: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 9 00:15:40.396: INFO: update-demo-nautilus-pmkp4 is verified up and running STEP: using delete to clean up resources Apr 9 00:15:40.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7707' Apr 9 00:15:40.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 9 00:15:40.498: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 9 00:15:40.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7707' Apr 9 00:15:40.595: INFO: stderr: "No resources found in kubectl-7707 namespace.\n" Apr 9 00:15:40.595: INFO: stdout: "" Apr 9 00:15:40.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 00:15:40.693: INFO: stderr: "" Apr 9 00:15:40.693: INFO: stdout: "update-demo-nautilus-gp2rq\nupdate-demo-nautilus-pmkp4\n" Apr 9 00:15:41.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7707' Apr 9 00:15:41.295: INFO: stderr: "No resources found in kubectl-7707 namespace.\n" Apr 9 00:15:41.295: INFO: stdout: "" Apr 9 00:15:41.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7707 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 00:15:41.393: INFO: stderr: "" Apr 9 00:15:41.393: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:15:41.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7707" for this suite. 
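The scale test above loops: list pods by the `name=update-demo` label, check each pod's container state via a go-template, and retry every 5 seconds until the pod count matches the requested replicas. A minimal Python sketch of that polling logic (not the actual Go framework code; `list_pods` and its `(name, running)` tuple shape are illustrative stand-ins for the `kubectl get pods` calls):

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300.0, poll=5.0,
                      sleep=time.sleep):
    """Poll until exactly `expected` pods exist and each reports its
    container running, mirroring the kubectl get/validate loop in the log.
    `list_pods` returns a list of (name, is_running) tuples."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pods = list_pods()
        if len(pods) == expected and all(running for _, running in pods):
            return [name for name, _ in pods]
        sleep(poll)  # e2e retries on a fixed interval, as the timestamps show
    raise TimeoutError(f"expected {expected} running pods before timeout")
```

The framework's version additionally re-reads the pod list each round, which is why a freshly scaled-down controller briefly reports `expected=1 actual=2` before the extra pod disappears.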
• [SLOW TEST:23.535 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":147,"skipped":2707,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:15:41.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 9 00:15:41.640: INFO: Waiting up to 5m0s for pod "pod-a6017876-2cbc-44c5-88ae-f555f218123d" in namespace "emptydir-7726" to be "Succeeded or Failed" Apr 9 00:15:41.643: INFO: Pod "pod-a6017876-2cbc-44c5-88ae-f555f218123d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.116121ms Apr 9 00:15:43.648: INFO: Pod "pod-a6017876-2cbc-44c5-88ae-f555f218123d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007428635s Apr 9 00:15:45.652: INFO: Pod "pod-a6017876-2cbc-44c5-88ae-f555f218123d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012018952s STEP: Saw pod success Apr 9 00:15:45.652: INFO: Pod "pod-a6017876-2cbc-44c5-88ae-f555f218123d" satisfied condition "Succeeded or Failed" Apr 9 00:15:45.656: INFO: Trying to get logs from node latest-worker2 pod pod-a6017876-2cbc-44c5-88ae-f555f218123d container test-container: STEP: delete the pod Apr 9 00:15:45.692: INFO: Waiting for pod pod-a6017876-2cbc-44c5-88ae-f555f218123d to disappear Apr 9 00:15:45.720: INFO: Pod pod-a6017876-2cbc-44c5-88ae-f555f218123d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:15:45.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7726" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:15:45.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:15:45.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 9 
00:15:45.921: INFO: stderr: "" Apr 9 00:15:45.921: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:15:45.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5000" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":149,"skipped":2727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:15:45.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:15:46.012: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5" in namespace "projected-4414" to be "Succeeded or Failed" Apr 9 00:15:46.015: INFO: Pod "downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.219088ms Apr 9 00:15:48.026: INFO: Pod "downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014694503s Apr 9 00:15:50.031: INFO: Pod "downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018991073s STEP: Saw pod success Apr 9 00:15:50.031: INFO: Pod "downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5" satisfied condition "Succeeded or Failed" Apr 9 00:15:50.033: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5 container client-container: STEP: delete the pod Apr 9 00:15:50.068: INFO: Waiting for pod downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5 to disappear Apr 9 00:15:50.071: INFO: Pod downwardapi-volume-aab371ed-acd8-4df0-bcce-402d54bc36f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:15:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4414" for this suite. 
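Several tests above share the same wait pattern: poll the pod's phase every ~2 seconds, logging the elapsed time, until it reaches "Succeeded" or "Failed" or a 5-minute timeout expires. A hedged Python sketch of that loop (the real implementation is Go inside the e2e framework; `get_phase` is a hypothetical accessor standing in for the API read):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase,
    printing elapsed time each attempt like the framework's log lines."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        print(f'Phase="{phase}", elapsed: {clock() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError('pod never satisfied condition "Succeeded or Failed"')
```

This matches the cadence visible in the log: a quick first check, then roughly two-second gaps until the pod flips from Pending to Succeeded.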
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2780,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:15:50.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:15:50.185: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 9 00:15:50.191: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:50.196: INFO: Number of nodes with available pods: 0 Apr 9 00:15:50.196: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:15:51.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:51.253: INFO: Number of nodes with available pods: 0 Apr 9 00:15:51.253: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:15:52.255: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:52.257: INFO: Number of nodes with available pods: 0 Apr 9 00:15:52.257: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:15:53.202: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:53.206: INFO: Number of nodes with available pods: 0 Apr 9 00:15:53.206: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:15:54.202: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:54.206: INFO: Number of nodes with available pods: 2 Apr 9 00:15:54.206: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 9 00:15:54.255: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 00:15:54.255: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:54.262: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:55.272: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:55.272: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:55.276: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:56.267: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:56.267: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:56.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:57.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:57.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 00:15:57.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:58.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:58.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:58.266: INFO: Pod daemon-set-cnz7z is not available Apr 9 00:15:58.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:15:59.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:59.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:15:59.266: INFO: Pod daemon-set-cnz7z is not available Apr 9 00:15:59.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:00.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:00.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 00:16:00.266: INFO: Pod daemon-set-cnz7z is not available Apr 9 00:16:00.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:01.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:01.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:01.266: INFO: Pod daemon-set-cnz7z is not available Apr 9 00:16:01.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:02.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:02.266: INFO: Wrong image for pod: daemon-set-cnz7z. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:02.266: INFO: Pod daemon-set-cnz7z is not available Apr 9 00:16:02.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:03.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:03.266: INFO: Pod daemon-set-hrz5k is not available Apr 9 00:16:03.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:04.265: INFO: Wrong image for pod: daemon-set-9r48t. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:04.265: INFO: Pod daemon-set-hrz5k is not available Apr 9 00:16:04.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:05.265: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:05.265: INFO: Pod daemon-set-hrz5k is not available Apr 9 00:16:05.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:06.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:06.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:07.267: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:07.267: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:07.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:08.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 00:16:08.266: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:08.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:09.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:09.266: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:09.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:10.272: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:10.272: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:10.276: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:11.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 9 00:16:11.266: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:11.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:12.266: INFO: Wrong image for pod: daemon-set-9r48t. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 9 00:16:12.266: INFO: Pod daemon-set-9r48t is not available Apr 9 00:16:12.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:13.266: INFO: Pod daemon-set-jhgv5 is not available Apr 9 00:16:13.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 9 00:16:13.273: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:13.275: INFO: Number of nodes with available pods: 1 Apr 9 00:16:13.275: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:16:14.280: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:14.283: INFO: Number of nodes with available pods: 1 Apr 9 00:16:14.283: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:16:15.281: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:15.285: INFO: Number of nodes with available pods: 1 Apr 9 00:16:15.285: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:16:16.281: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:16:16.284: INFO: Number of nodes with available pods: 2 Apr 9 00:16:16.284: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7068, will wait for the garbage collector to delete the pods Apr 9 00:16:16.356: INFO: Deleting DaemonSet.extensions daemon-set took: 6.227003ms Apr 9 00:16:16.656: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.246153ms Apr 9 00:16:23.074: INFO: Number of nodes with available pods: 0 Apr 9 00:16:23.074: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 00:16:23.077: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7068/daemonsets","resourceVersion":"6546239"},"items":null} Apr 9 00:16:23.080: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7068/pods","resourceVersion":"6546239"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:16:23.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7068" for this suite. 
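The RollingUpdate check above repeatedly compares each daemon pod's image against the updated spec and reports mismatches until every pod has been replaced. A small Python sketch of that comparison step (illustrative only; the framework does this in Go against live pod objects, and `pods` here is a hypothetical name-to-image mapping):

```python
def pods_pending_update(pods, expected_image):
    """Return names of daemon pods still running an image other than
    `expected_image`, echoing the 'Wrong image for pod' log lines.
    `pods` maps pod name -> currently running image."""
    wrong = []
    for name, image in sorted(pods.items()):
        if image != expected_image:
            print(f"Wrong image for pod: {name}. "
                  f"Expected: {expected_image}, got: {image}.")
            wrong.append(name)
    return wrong  # rollout is complete once this list is empty
```

In the log this list shrinks one pod at a time, because RollingUpdate's default `maxUnavailable` of 1 lets the controller delete and replace only a single daemon pod per node at once.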
• [SLOW TEST:33.019 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":151,"skipped":2798,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:16:23.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:16:23.162: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Pending, waiting for it to be Running (with Ready = true)
Apr 9 00:16:25.168: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Pending, waiting for it to be Running (with Ready = true)
Apr 9 00:16:27.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:29.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:31.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:33.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:35.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:37.165: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:39.165: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:41.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:43.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = false)
Apr 9 00:16:45.166: INFO: The status of Pod test-webserver-4045e711-1184-4ca5-8977-4f35ee0cc54a is Running (Ready = true)
Apr 9 00:16:45.169: INFO: Container started at 2020-04-09 00:16:25 +0000 UTC, pod became ready at 2020-04-09 00:16:43 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:16:45.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5747" for this suite.
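The ~18-second gap the log reports between container start (00:16:25) and readiness (00:16:43) is the readiness-probe initial delay at work. A pod exercising the same behavior can be sketched as below; this is an assumption-based illustration (image, port, and probe values are examples, not the suite's exact spec):

```yaml
# Illustrative pod with a readiness probe and a deliberate initial delay.
# The container runs immediately, but the pod reports Ready = false until
# the probe is allowed to run and succeeds; it must also never restart.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver          # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: httpd:2.4.38-alpine  # placeholder image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # example delay; probing starts only after this
      periodSeconds: 2
```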
• [SLOW TEST:22.082 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2823,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:16:45.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 00:16:45.669: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 00:16:47.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988205, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988205, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988205, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988205, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 00:16:50.747: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:16:50.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:16:51.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4985" for this suite.
STEP: Destroying namespace "webhook-4985-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.830 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":153,"skipped":2823,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:16:52.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 9 00:16:52.158: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1198 /api/v1/namespaces/watch-1198/configmaps/e2e-watch-test-resource-version 4ded23a1-f400-4144-b434-1e20eb4a1d48 6546437 0 2020-04-09 00:16:52 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 9 00:16:52.158: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1198 /api/v1/namespaces/watch-1198/configmaps/e2e-watch-test-resource-version 4ded23a1-f400-4144-b434-1e20eb4a1d48 6546438 0 2020-04-09 00:16:52 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:16:52.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1198" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":154,"skipped":2829,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:16:52.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:16:52.240: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-529f4a20-335d-437f-8d61-87b9bc36b69c" in namespace "security-context-test-9683" to be "Succeeded or Failed"
Apr 9 00:16:52.257: INFO: Pod "busybox-readonly-false-529f4a20-335d-437f-8d61-87b9bc36b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.868632ms
Apr 9 00:16:54.260: INFO: Pod "busybox-readonly-false-529f4a20-335d-437f-8d61-87b9bc36b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020594583s
Apr 9 00:16:56.264: INFO: Pod "busybox-readonly-false-529f4a20-335d-437f-8d61-87b9bc36b69c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024359671s
Apr 9 00:16:56.264: INFO: Pod "busybox-readonly-false-529f4a20-335d-437f-8d61-87b9bc36b69c" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:16:56.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9683" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2890,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSX
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:16:56.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 9 00:16:56.354: INFO: >>> kubeConfig: /root/.kube/config
Apr 9 00:16:58.312: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:17:09.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4283" for this suite.
• [SLOW TEST:13.515 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":156,"skipped":2954,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:17:09.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 9 00:17:09.889:
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2292' Apr 9 00:17:09.997: INFO: stderr: "" Apr 9 00:17:09.997: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 9 00:17:15.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2292 -o json' Apr 9 00:17:15.143: INFO: stderr: "" Apr 9 00:17:15.143: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-09T00:17:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2292\",\n \"resourceVersion\": \"6546580\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2292/pods/e2e-test-httpd-pod\",\n \"uid\": \"2d6fbc0c-a954-4cc9-9d41-df14e69067d4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kzbsv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kzbsv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kzbsv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T00:17:10Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T00:17:12Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T00:17:12Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-09T00:17:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://49629d0911777edeadf7eef4aa7badd4fbb1b714ab84e9ca3c8f4de3994ff82f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-09T00:17:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.189\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.189\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-09T00:17:10Z\"\n }\n}\n" STEP: replace the image in the pod Apr 9 00:17:15.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2292' Apr 9 00:17:15.428: INFO: stderr: "" Apr 9 00:17:15.428: 
INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Apr 9 00:17:15.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2292'
Apr 9 00:17:22.988: INFO: stderr: ""
Apr 9 00:17:22.988: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:17:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2292" for this suite.
• [SLOW TEST:13.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":157,"skipped":2964,"failed":0}
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:17:22.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 9 00:17:23.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a" in namespace "projected-469" to be "Succeeded or Failed"
Apr 9 00:17:23.065: INFO: Pod "downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186592ms
Apr 9 00:17:25.069: INFO: Pod "downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008361085s
Apr 9 00:17:27.074: INFO: Pod "downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012878455s
STEP: Saw pod success
Apr 9 00:17:27.074: INFO: Pod "downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a" satisfied condition "Succeeded or Failed"
Apr 9 00:17:27.077: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a container client-container:
STEP: delete the pod
Apr 9 00:17:27.124: INFO: Waiting for pod downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a to disappear
Apr 9 00:17:27.149: INFO: Pod downwardapi-volume-207d8302-e2ad-4e4f-a9c2-6d0e152c051a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:17:27.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-469" for this suite.
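The downward API volume plugin exercised above can be illustrated with a pod like the following. This is a sketch under assumptions (names, image, and the 32Mi request are examples): a `projected` volume exposes the container's memory request as a file via `resourceFieldRef`, and the container simply prints it and exits, which matches the "Succeeded or Failed" pattern in the log.

```yaml
# Illustrative projected downward API volume exposing requests.memory.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # example request; surfaced in the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```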
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2964,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:17:27.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 9 00:17:35.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 00:17:35.254: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 00:17:37.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 00:17:37.258: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 00:17:39.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 00:17:39.274: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 00:17:41.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 00:17:41.267: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 9 00:17:43.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 9 00:17:43.258: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:17:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3696" for this suite.
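The preStop flow above (delete the pod, wait for it to disappear, then "check prestop hook") relies on a pod shaped roughly like the one below. This is an assumption-based sketch: the pod name matches the log, but the image, command, and the handler-pod address are hypothetical. On deletion, the kubelet runs the `preStop` exec command before terminating the container, and the separate handler pod records the request so the test can verify it fired.

```yaml
# Illustrative pod with a preStop exec hook; handler address is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook   # same name as in the log
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Notify the hook-handler pod before this container is stopped.
          command: ["sh", "-c", "wget -qO- http://10.0.0.10:8080/echo?msg=prestop"]
```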
• [SLOW TEST:16.152 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2970,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:17:43.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 00:17:43.805: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 00:17:45.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988263, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988263, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988263, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988263, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 00:17:48.843: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:17:48.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6057" for this suite.
STEP: Destroying namespace "webhook-6057-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.790 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":160,"skipped":2976,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:17:49.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:18:04.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1453" for this suite. STEP: Destroying namespace "nsdeletetest-249" for this suite. Apr 9 00:18:04.296: INFO: Namespace nsdeletetest-249 was already deleted STEP: Destroying namespace "nsdeletetest-4195" for this suite. • [SLOW TEST:15.201 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":161,"skipped":2980,"failed":0} [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:18:04.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3986;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3986;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3986.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3986.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 64.227.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.227.64_udp@PTR;check="$$(dig +tcp +noall +answer +search 64.227.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.227.64_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3986;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3986;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3986.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3986.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3986.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3986.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 64.227.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.227.64_udp@PTR;check="$$(dig +tcp +noall +answer +search 64.227.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.227.64_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:18:10.527: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.531: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.534: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.537: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the 
server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.547: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.570: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.572: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.575: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986 from pod 
dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.581: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.584: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.587: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.590: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:10.607: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] Apr 9 00:18:15.613: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.616: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.620: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.633: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.636: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.659: INFO: 
Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.662: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.665: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.671: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.675: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:15.681: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 
00:18:15.698: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] Apr 9 00:18:20.612: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.615: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.623: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.626: INFO: Unable 
to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.629: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.633: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.655: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.658: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.660: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.663: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.666: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.669: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.672: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:20.692: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] Apr 9 00:18:25.612: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.615: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 
00:18:25.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.623: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.626: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.629: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.658: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.661: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods 
dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.664: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.667: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.670: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.673: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.676: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:25.700: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] Apr 9 00:18:30.612: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.616: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.620: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.633: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod 
dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.636: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.658: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.661: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.664: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.667: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.670: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.673: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.676: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.679: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:30.701: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] Apr 9 00:18:35.613: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.616: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.620: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.623: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.626: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.629: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.631: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.698: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.701: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.705: INFO: Unable to read jessie_udp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.708: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-3986 from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.711: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.714: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.717: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.720: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc from pod dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722: the server could not find the requested resource (get pods dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722) Apr 9 00:18:35.738: INFO: Lookups using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3986 wheezy_tcp@dns-test-service.dns-3986 wheezy_udp@dns-test-service.dns-3986.svc wheezy_tcp@dns-test-service.dns-3986.svc wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3986 jessie_tcp@dns-test-service.dns-3986 jessie_udp@dns-test-service.dns-3986.svc jessie_tcp@dns-test-service.dns-3986.svc jessie_udp@_http._tcp.dns-test-service.dns-3986.svc jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc] 
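The failed-lookup list above is the cross product of two client images (wheezy, jessie), two protocols (udp, tcp), and four service-name forms, from the bare partial name up to the `_http._tcp` SRV form. A minimal sketch of how those 16 probe names are assembled (this is an illustration of the naming pattern in the log, not the actual e2e helper code):

```go
package main

import "fmt"

// probeNames rebuilds the 16 lookup names seen in the log: for each
// client image and each service-name form, one UDP and one TCP probe.
func probeNames(service, namespace string) []string {
	forms := []string{
		service,                                            // partial name
		service + "." + namespace,                          // namespace-qualified
		service + "." + namespace + ".svc",                 // svc-qualified
		"_http._tcp." + service + "." + namespace + ".svc", // SRV record form
	}
	var names []string
	for _, image := range []string{"wheezy", "jessie"} {
		for _, form := range forms {
			for _, proto := range []string{"udp", "tcp"} {
				names = append(names, fmt.Sprintf("%s_%s@%s", image, proto, form))
			}
		}
	}
	return names
}

func main() {
	// Reproduces the order of the "Lookups ... failed for:" list above.
	for _, n := range probeNames("dns-test-service", "dns-3986") {
		fmt.Println(n)
	}
}
```

The probes keep retrying on an interval (roughly every five seconds in the timestamps above) until all names resolve, at which point the test logs "DNS probes ... succeeded".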
Apr 9 00:18:40.696: INFO: DNS probes using dns-3986/dns-test-19bfd5ba-a54d-47bb-ab8a-355e95e31722 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:18:41.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3986" for this suite. • [SLOW TEST:37.035 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":162,"skipped":2980,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:18:41.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label 
A and ensuring the correct watchers observe the notification Apr 9 00:18:41.498: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547082 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:18:41.498: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547082 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 9 00:18:51.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547129 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:18:51.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547129 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 9 00:19:01.514: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547159 0 
2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:19:01.515: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547159 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 9 00:19:11.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547189 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:19:11.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-a 32641db0-87f0-440a-99bb-75de3fd1ff91 6547189 0 2020-04-09 00:18:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 9 00:19:21.530: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-b 46e31f00-3829-4c66-be42-c422de036183 6547219 0 2020-04-09 00:19:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:19:21.530: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-b 46e31f00-3829-4c66-be42-c422de036183 6547219 0 2020-04-09 00:19:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 9 00:19:31.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-b 46e31f00-3829-4c66-be42-c422de036183 6547249 0 2020-04-09 00:19:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 9 00:19:31.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4676 /api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-configmap-b 46e31f00-3829-4c66-be42-c422de036183 6547249 0 2020-04-09 00:19:21 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:19:41.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4676" for this suite. 
• [SLOW TEST:60.213 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":163,"skipped":2991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:19:41.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:19:45.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2312" for this suite. 
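The Kubelet test above schedules a busybox container whose root filesystem is mounted read-only and verifies that writes to it fail. A minimal pod spec of the kind that exercises this behavior — `securityContext.readOnlyRootFilesystem` is the standard Kubernetes API field; the names, image tag, and command here are illustrative, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs      # illustrative name, not from the log
spec:
  containers:
  - name: busybox
    image: busybox:1.29          # assumption: any busybox tag would do
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true   # the write to / fails, proving the mount is read-only
  restartPolicy: Never
```

With this field set, the kubelet mounts the container's root filesystem read-only, so the `echo test > /file` redirect is rejected while writes to mounted volumes remain possible.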
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":3036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:19:45.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:19:45.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5559" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":165,"skipped":3068,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:19:45.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-xv5hg in namespace proxy-2606 I0409 00:19:45.949755 7 runners.go:190] Created replication controller with name: proxy-service-xv5hg, namespace: proxy-2606, replica count: 1 I0409 00:19:47.000193 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:19:48.000391 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:19:49.000623 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:19:50.000865 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:51.001252 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:52.001497 7 runners.go:190] 
proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:53.001759 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:54.001999 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:55.002253 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:56.002472 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:57.002677 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0409 00:19:58.002920 7 runners.go:190] proxy-service-xv5hg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 00:19:58.007: INFO: setup took 12.120661373s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 9 00:19:58.018: INFO: (0) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 11.511397ms) Apr 9 00:19:58.018: INFO: (0) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 11.470102ms) Apr 9 00:19:58.018: INFO: (0) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 11.502399ms) Apr 9 00:19:58.018: INFO: (0) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 11.53874ms) Apr 9 00:19:58.019: INFO: (0) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... 
(200; 11.669761ms) Apr 9 00:19:58.019: INFO: (0) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 11.725263ms) Apr 9 00:19:58.020: INFO: (0) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 13.305973ms) Apr 9 00:19:58.020: INFO: (0) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 13.449836ms) Apr 9 00:19:58.020: INFO: (0) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 13.458301ms) Apr 9 00:19:58.020: INFO: (0) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 13.622591ms) Apr 9 00:19:58.021: INFO: (0) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 14.409926ms) Apr 9 00:19:58.023: INFO: (0) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 4.605042ms) Apr 9 00:19:58.032: INFO: (1) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... 
(200; 4.316794ms) Apr 9 00:19:58.032: INFO: (1) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.666523ms) Apr 9 00:19:58.032: INFO: (1) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 4.924612ms) Apr 9 00:19:58.032: INFO: (1) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.968652ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.146304ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 5.234168ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.109761ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.478585ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.688194ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.768704ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.754796ms) Apr 9 00:19:58.033: INFO: (1) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test (200; 5.216951ms) Apr 9 00:19:58.039: INFO: (2) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.258535ms) Apr 9 00:19:58.039: INFO: (2) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 5.254393ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... 
(200; 6.215082ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 6.527356ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 6.589661ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 6.590828ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 6.707876ms) Apr 9 00:19:58.040: INFO: (2) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... (200; 4.10009ms) Apr 9 00:19:58.044: INFO: (3) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 4.077329ms) Apr 9 00:19:58.044: INFO: (3) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 4.168693ms) Apr 9 00:19:58.044: INFO: (3) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.166165ms) Apr 9 00:19:58.044: INFO: (3) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.148826ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 4.370201ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 4.555949ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.48202ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.915763ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.916513ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 
4.958336ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 4.964461ms) Apr 9 00:19:58.045: INFO: (3) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test (200; 3.617541ms) Apr 9 00:19:58.049: INFO: (4) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 3.678599ms) Apr 9 00:19:58.049: INFO: (4) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 3.710227ms) Apr 9 00:19:58.049: INFO: (4) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.837383ms) Apr 9 00:19:58.049: INFO: (4) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 3.906105ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 4.126567ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.262825ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.40993ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 4.375696ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.415441ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 4.684518ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.62394ms) Apr 9 00:19:58.050: INFO: (4) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 4.704026ms) Apr 9 00:19:58.051: INFO: (4) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 4.897492ms) Apr 9 
00:19:58.053: INFO: (5) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 2.510703ms) Apr 9 00:19:58.053: INFO: (5) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 2.623119ms) Apr 9 00:19:58.053: INFO: (5) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... (200; 6.400706ms) Apr 9 00:19:58.057: INFO: (5) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 6.685334ms) Apr 9 00:19:58.057: INFO: (5) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 6.86698ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 6.909455ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 6.952211ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 7.083053ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 7.34027ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 7.489234ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 7.657349ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 7.654838ms) Apr 9 00:19:58.058: INFO: (5) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 7.894708ms) Apr 9 00:19:58.059: INFO: (5) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 7.895978ms) Apr 9 00:19:58.059: INFO: (5) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 8.186319ms) Apr 9 00:19:58.063: INFO: (6) 
/api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.899187ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 3.926345ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 4.007943ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.951324ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 4.083844ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.181403ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.283099ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 4.508214ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.467348ms) Apr 9 00:19:58.063: INFO: (6) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.514105ms) Apr 9 00:19:58.064: INFO: (6) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.552995ms) Apr 9 00:19:58.065: INFO: (6) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.682982ms) Apr 9 00:19:58.065: INFO: (6) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.686554ms) Apr 9 00:19:58.065: INFO: (6) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.969683ms) Apr 9 00:19:58.065: INFO: (6) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.994279ms) Apr 9 00:19:58.072: INFO: (7) 
/api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 6.861351ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 6.839822ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 6.889398ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 6.872905ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 6.874325ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 6.823882ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 7.05652ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 7.006629ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 7.10948ms) Apr 9 00:19:58.072: INFO: (7) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 3.273779ms) Apr 9 00:19:58.076: INFO: (8) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... 
(200; 4.399818ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.651092ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.712209ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.835648ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 4.854824ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 4.956202ms) Apr 9 00:19:58.078: INFO: (8) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 4.924933ms) Apr 9 00:19:58.079: INFO: (8) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.373609ms) Apr 9 00:19:58.082: INFO: (9) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.203956ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.11378ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 4.105309ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.1982ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... 
(200; 4.19658ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.374134ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 4.362232ms) Apr 9 00:19:58.083: INFO: (9) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.585631ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.611371ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.711854ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.751853ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.721047ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.754263ms) Apr 9 00:19:58.084: INFO: (9) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.783797ms) Apr 9 00:19:58.087: INFO: (10) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 2.658074ms) Apr 9 00:19:58.090: INFO: (10) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.238136ms) Apr 9 00:19:58.094: INFO: (10) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 8.962832ms) Apr 9 00:19:58.094: INFO: (10) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 9.601898ms) Apr 9 00:19:58.094: INFO: (10) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 9.629706ms) Apr 9 00:19:58.094: INFO: (10) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 
9.731252ms) Apr 9 00:19:58.094: INFO: (10) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 9.908035ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 10.011065ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 10.068541ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 10.02526ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 10.07005ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 10.033362ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 10.0307ms) Apr 9 00:19:58.095: INFO: (10) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 10.40672ms) Apr 9 00:19:58.098: INFO: (11) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.051798ms) Apr 9 00:19:58.098: INFO: (11) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 3.28068ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.488384ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.545478ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 3.589257ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... 
(200; 3.591116ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.602905ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 3.647195ms) Apr 9 00:19:58.099: INFO: (11) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 3.6513ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 4.669729ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 4.756112ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.008614ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.431248ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.483568ms) Apr 9 00:19:58.100: INFO: (11) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.431565ms) Apr 9 00:19:58.103: INFO: (12) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... (200; 2.423016ms) Apr 9 00:19:58.103: INFO: (12) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... 
(200; 2.888739ms) Apr 9 00:19:58.104: INFO: (12) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 2.988581ms) Apr 9 00:19:58.104: INFO: (12) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.338983ms) Apr 9 00:19:58.104: INFO: (12) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 3.314639ms) Apr 9 00:19:58.105: INFO: (12) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 4.062012ms) Apr 9 00:19:58.105: INFO: (12) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 4.184622ms) Apr 9 00:19:58.105: INFO: (12) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.414589ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.237834ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.37117ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 5.363089ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.227977ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.322454ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.328815ms) Apr 9 00:19:58.106: INFO: (12) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.514547ms) Apr 9 00:19:58.109: INFO: (13) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... 
(200; 2.696813ms) Apr 9 00:19:58.109: INFO: (13) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test (200; 5.334046ms) Apr 9 00:19:58.112: INFO: (13) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 5.421437ms) Apr 9 00:19:58.112: INFO: (13) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.456292ms) Apr 9 00:19:58.112: INFO: (13) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.621783ms) Apr 9 00:19:58.112: INFO: (13) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 5.517112ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 2.841779ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 3.151034ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.144347ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.22648ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... 
(200; 3.274448ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 3.281916ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.338889ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 3.357385ms) Apr 9 00:19:58.115: INFO: (14) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 3.49019ms) Apr 9 00:19:58.117: INFO: (14) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 4.710122ms) Apr 9 00:19:58.117: INFO: (14) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.582564ms) Apr 9 00:19:58.117: INFO: (14) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 4.669302ms) Apr 9 00:19:58.117: INFO: (14) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.025715ms) Apr 9 00:19:58.117: INFO: (14) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.106885ms) Apr 9 00:19:58.121: INFO: (15) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.030982ms) Apr 9 00:19:58.122: INFO: (15) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.040903ms) Apr 9 00:19:58.122: INFO: (15) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.129682ms) Apr 9 00:19:58.122: INFO: (15) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.406632ms) Apr 9 00:19:58.122: INFO: (15) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.383206ms) Apr 9 00:19:58.123: INFO: (15) 
/api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.47833ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 5.806158ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test (200; 5.848449ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.820234ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.906789ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 6.006778ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 5.931124ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 5.884723ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.947102ms) Apr 9 00:19:58.123: INFO: (15) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.891541ms) Apr 9 00:19:58.126: INFO: (16) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 2.47943ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.38532ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... 
(200; 3.858412ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 3.915307ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 3.939205ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 3.962567ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 3.912083ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 3.924132ms) Apr 9 00:19:58.127: INFO: (16) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 3.919705ms) Apr 9 00:19:58.128: INFO: (16) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 4.330912ms) Apr 9 00:19:58.128: INFO: (16) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test (200; 3.681916ms) Apr 9 00:19:58.133: INFO: (17) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... (200; 3.832301ms) Apr 9 00:19:58.133: INFO: (17) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 4.116802ms) Apr 9 00:19:58.133: INFO: (17) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.118327ms) Apr 9 00:19:58.133: INFO: (17) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 4.17921ms) Apr 9 00:19:58.133: INFO: (17) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... 
(200; 4.534843ms) Apr 9 00:19:58.134: INFO: (17) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.693912ms) Apr 9 00:19:58.134: INFO: (17) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 4.733856ms) Apr 9 00:19:58.134: INFO: (17) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.894597ms) Apr 9 00:19:58.134: INFO: (17) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 4.968073ms) Apr 9 00:19:58.136: INFO: (18) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 2.115894ms) Apr 9 00:19:58.136: INFO: (18) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 2.043349ms) Apr 9 00:19:58.137: INFO: (18) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 2.80924ms) Apr 9 00:19:58.138: INFO: (18) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:1080/proxy/: ... (200; 3.49509ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 4.689005ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.145351ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: test<... 
(200; 5.147019ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 5.152129ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 5.231519ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 5.148656ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 5.224129ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname2/proxy/: bar (200; 5.199341ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.395115ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.405274ms) Apr 9 00:19:58.139: INFO: (18) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 5.479281ms) Apr 9 00:19:58.144: INFO: (19) /api/v1/namespaces/proxy-2606/services/http:proxy-service-xv5hg:portname1/proxy/: foo (200; 4.602595ms) Apr 9 00:19:58.144: INFO: (19) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname1/proxy/: tls baz (200; 4.748833ms) Apr 9 00:19:58.144: INFO: (19) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:462/proxy/: tls qux (200; 4.842955ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/services/https:proxy-service-xv5hg:tlsportname2/proxy/: tls qux (200; 4.986373ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:1080/proxy/: test<... 
(200; 5.37164ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s/proxy/: test (200; 5.349035ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname2/proxy/: bar (200; 5.435767ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.376742ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.469361ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:460/proxy/: tls baz (200; 5.37316ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:162/proxy/: bar (200; 5.649457ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/http:proxy-service-xv5hg-ljt4s:160/proxy/: foo (200; 5.779854ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/services/proxy-service-xv5hg:portname1/proxy/: foo (200; 5.756476ms) Apr 9 00:19:58.145: INFO: (19) /api/v1/namespaces/proxy-2606/pods/https:proxy-service-xv5hg-ljt4s:443/proxy/: ... (200; 5.942063ms) STEP: deleting ReplicationController proxy-service-xv5hg in namespace proxy-2606, will wait for the garbage collector to delete the pods Apr 9 00:19:58.205: INFO: Deleting ReplicationController proxy-service-xv5hg took: 7.064736ms Apr 9 00:19:58.506: INFO: Terminating ReplicationController proxy-service-xv5hg pods took: 301.009686ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:20:01.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2606" for this suite. 
• [SLOW TEST:15.395 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":166,"skipped":3069,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:20:01.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 9 00:20:01.258: INFO: >>> kubeConfig: /root/.kube/config
Apr 9 00:20:04.170: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:20:14.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4694" for this suite.
• [SLOW TEST:13.492 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":167,"skipped":3075,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:20:14.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-9f72f70d-3b23-4e5e-8c3f-5b7380333538
STEP: Creating a pod to test consume configMaps
Apr 9 00:20:14.783: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0" in namespace "configmap-9083" to be "Succeeded or Failed"
Apr 9 00:20:14.787: INFO: Pod "pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664503ms
Apr 9 00:20:16.791: INFO: Pod "pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007803536s
Apr 9 00:20:18.795: INFO: Pod "pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012246282s
STEP: Saw pod success
Apr 9 00:20:18.795: INFO: Pod "pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0" satisfied condition "Succeeded or Failed"
Apr 9 00:20:18.799: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0 container configmap-volume-test:
STEP: delete the pod
Apr 9 00:20:18.830: INFO: Waiting for pod pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0 to disappear
Apr 9 00:20:18.834: INFO: Pod pod-configmaps-f2dbcae3-b993-482c-ae4d-b452955871a0 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:20:18.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9083" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":3097,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:20:18.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 9 00:20:18.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821" in namespace "projected-6951" to be "Succeeded or Failed"
Apr 9 00:20:18.967: INFO: Pod "downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821": Phase="Pending", Reason="", readiness=false. Elapsed: 5.454311ms
Apr 9 00:20:21.037: INFO: Pod "downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075790086s
Apr 9 00:20:23.041: INFO: Pod "downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079896595s
STEP: Saw pod success
Apr 9 00:20:23.041: INFO: Pod "downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821" satisfied condition "Succeeded or Failed"
Apr 9 00:20:23.044: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821 container client-container:
STEP: delete the pod
Apr 9 00:20:23.076: INFO: Waiting for pod downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821 to disappear
Apr 9 00:20:23.102: INFO: Pod downwardapi-volume-9691a0d2-aa06-4632-8d7d-c8bf6056b821 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:20:23.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6951" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":3105,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:20:23.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:20:23.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 9 00:20:26.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 create -f -'
Apr 9 00:20:29.207: INFO: stderr: ""
Apr 9 00:20:29.207: INFO: stdout: "e2e-test-crd-publish-openapi-6090-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 9 00:20:29.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 delete e2e-test-crd-publish-openapi-6090-crds test-foo'
Apr 9 00:20:29.345: INFO: stderr: ""
Apr 9 00:20:29.345: INFO: stdout: "e2e-test-crd-publish-openapi-6090-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 9 00:20:29.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 apply -f -'
Apr 9 00:20:29.588: INFO: stderr: ""
Apr 9 00:20:29.588: INFO: stdout: "e2e-test-crd-publish-openapi-6090-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 9 00:20:29.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 delete e2e-test-crd-publish-openapi-6090-crds test-foo'
Apr 9 00:20:29.693: INFO: stderr: ""
Apr 9 00:20:29.693: INFO: stdout: "e2e-test-crd-publish-openapi-6090-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 9 00:20:29.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 create -f -'
Apr 9 00:20:29.966: INFO: rc: 1
Apr 9 00:20:29.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 apply -f -'
Apr 9 00:20:30.187: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 9 00:20:30.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 create -f -'
Apr 9 00:20:30.433: INFO: rc: 1
Apr 9 00:20:30.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1418 apply -f -'
Apr 9 00:20:30.667: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 9 00:20:30.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6090-crds'
Apr 9 00:20:30.904: INFO: stderr:
"" Apr 9 00:20:30.904: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6090-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 9 00:20:30.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6090-crds.metadata' Apr 9 00:20:31.122: INFO: stderr: "" Apr 9 00:20:31.122: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6090-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. 
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 9 00:20:31.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6090-crds.spec' Apr 9 00:20:31.373: INFO: stderr: "" Apr 9 00:20:31.373: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6090-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 9 00:20:31.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6090-crds.spec.bars' Apr 9 00:20:31.645: INFO: stderr: "" Apr 9 00:20:31.645: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6090-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 9 00:20:31.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6090-crds.spec.bars2' Apr 9 00:20:31.860: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:20:34.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1418" for this suite. 
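The `kubectl explain` output above is generated from the CRD's structural OpenAPI v3 schema. A minimal sketch of a CRD that would produce output like this — the group matches the log, but the plural/singular names and the field types are assumptions, not the exact manifest the test creates:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com   # illustrative name
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos       # assumed; the test generates its own names
    singular: foo
    kind: Foo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: Foo CRD for Testing
          type: object
          properties:
            spec:
              description: Specification of Foo
              type: object
              properties:
                bars:
                  description: List of Bars and their specs.
                  type: array
                  items:
                    type: object
                    required: ["name"]
                    properties:
                      name:
                        description: Name of Bar.
                        type: string
                      age:
                        description: Age of Bar.
                        type: string      # type assumed; elided in the log
                      bazs:
                        description: List of Bazs.
                        type: array
                        items:
                          type: string
            status:
              description: Status of Foo
              type: object
```

Each `description` in the schema becomes the field help text that `kubectl explain <resource>.<path>` prints, and drilling into a property that is not in the schema (like `spec.bars2` above) exits non-zero.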
• [SLOW TEST:11.629 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":170,"skipped":3115,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:20:34.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:20:35.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:20:37.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988435, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988435, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988435, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988435, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:20:40.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:20:40.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9210" for this suite. STEP: Destroying namespace "webhook-9210-markers" for this suite. 
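"Fail closed" in this test means the webhook is registered with `failurePolicy: Fail`, so when the API server cannot reach the webhook endpoint, the admission request is rejected rather than allowed through. A rough sketch of such a registration — the webhook name, path, and rules here are illustrative, not the test's actual configuration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-webhook          # illustrative
webhooks:
  - name: fail-closed.example.com    # illustrative
    failurePolicy: Fail              # reject the request if the webhook cannot be reached
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-9210      # the test's namespace; service name/path assumed
        name: e2e-test-webhook
        path: /configmaps
```

With `failurePolicy: Ignore` instead, an unreachable webhook would be skipped and the create above would have succeeded.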
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.132 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":171,"skipped":3123,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:20:40.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:20:40.943: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:20:45.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "pods-2063" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3129,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:20:45.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 9 00:20:45.134: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 9 00:20:45.154: INFO: Waiting for terminating namespaces to be deleted... 
Apr 9 00:20:45.157: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 9 00:20:45.194: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:20:45.194: INFO: Container kube-proxy ready: true, restart count 0 Apr 9 00:20:45.194: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:20:45.194: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 00:20:45.194: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 9 00:20:45.200: INFO: pod-exec-websocket-ec08c72c-6996-4fca-9151-da31eefbb89a from pods-2063 started at 2020-04-09 00:20:41 +0000 UTC (1 container statuses recorded) Apr 9 00:20:45.200: INFO: Container main ready: true, restart count 0 Apr 9 00:20:45.200: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:20:45.200: INFO: Container kindnet-cni ready: true, restart count 0 Apr 9 00:20:45.200: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 9 00:20:45.200: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
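The three pods in this scenario all request `hostPort: 54321`, but the scheduler only treats host ports as conflicting when hostIP and protocol also match. A sketch of the port stanza involved — pod name and image are assumptions:

```yaml
# pod1: hostPort 54321, hostIP 127.0.0.1, TCP  -> schedules
# pod2: hostPort 54321, hostIP 127.0.0.2, TCP  -> no conflict (different hostIP)
# pod3: hostPort 54321, hostIP 127.0.0.2, UDP  -> no conflict (different protocol)
apiVersion: v1
kind: Pod
metadata:
  name: pod2                     # illustrative
spec:
  containers:
    - name: agnhost              # image is an assumption
      image: k8s.gcr.io/agnhost:2.8
      ports:
        - containerPort: 8080
          hostPort: 54321
          hostIP: 127.0.0.2
          protocol: TCP
```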
STEP: verifying the node has the label kubernetes.io/e2e-35b86d91-fe11-408c-a010-52927fa25a3b 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-35b86d91-fe11-408c-a010-52927fa25a3b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-35b86d91-fe11-408c-a010-52927fa25a3b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:21:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-138" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.282 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":173,"skipped":3151,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:21:01.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-2367f34e-f7f3-4569-bfb9-0e2ec963354c STEP: Creating configMap with name cm-test-opt-upd-e7b1e46b-6479-4d80-88af-05a037bf1e4d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2367f34e-f7f3-4569-bfb9-0e2ec963354c STEP: Updating configmap cm-test-opt-upd-e7b1e46b-6479-4d80-88af-05a037bf1e4d STEP: Creating configMap with name cm-test-opt-create-2d90e56c-b538-46d1-b3df-5e29bc1e1001 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:21:09.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1137" for this suite. 
• [SLOW TEST:8.244 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:21:09.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:21:09.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1835" for this suite. STEP: Destroying namespace "nspatchtest-906710b7-9dc6-4235-bb54-4b87a9542c20-7325" for this suite. 
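The patch step in this test can be reproduced with a merge patch against the namespace object; a minimal sketch (the label key/value are illustrative, not what the test sets):

```yaml
# kubectl patch namespace <name> --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
metadata:
  labels:
    testLabel: testValue   # illustrative label; the test then GETs the namespace and asserts it is present
```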
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":175,"skipped":3211,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:21:09.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:21:09.826: INFO: Create a RollingUpdate DaemonSet Apr 9 00:21:09.829: INFO: Check that daemon pods launch on every node of the cluster Apr 9 00:21:09.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:09.878: INFO: Number of nodes with available pods: 0 Apr 9 00:21:09.879: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:21:10.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:10.885: INFO: Number of nodes with available pods: 0 Apr 9 00:21:10.885: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:21:11.885: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:11.889: INFO: Number of nodes with available pods: 0 Apr 9 00:21:11.889: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:21:12.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:12.888: INFO: Number of nodes with available pods: 1 Apr 9 00:21:12.888: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:21:13.918: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:13.922: INFO: Number of nodes with available pods: 2 Apr 9 00:21:13.922: INFO: Number of running nodes: 2, number of available pods: 2 Apr 9 00:21:13.922: INFO: Update the DaemonSet to trigger a rollout Apr 9 00:21:13.930: INFO: Updating DaemonSet daemon-set Apr 9 00:21:22.949: INFO: Roll back the DaemonSet before rollout is complete Apr 9 00:21:22.955: INFO: Updating DaemonSet daemon-set Apr 9 00:21:22.955: INFO: Make sure DaemonSet rollback is complete Apr 9 00:21:22.960: INFO: Wrong image for pod: daemon-set-9sxtf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 9 00:21:22.960: INFO: Pod daemon-set-9sxtf is not available Apr 9 00:21:22.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:23.988: INFO: Wrong image for pod: daemon-set-9sxtf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 9 00:21:23.988: INFO: Pod daemon-set-9sxtf is not available Apr 9 00:21:23.992: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:24.988: INFO: Wrong image for pod: daemon-set-9sxtf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 9 00:21:24.988: INFO: Pod daemon-set-9sxtf is not available Apr 9 00:21:24.993: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:25.988: INFO: Wrong image for pod: daemon-set-9sxtf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 9 00:21:25.988: INFO: Pod daemon-set-9sxtf is not available Apr 9 00:21:25.993: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:27.092: INFO: Wrong image for pod: daemon-set-9sxtf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 9 00:21:27.092: INFO: Pod daemon-set-9sxtf is not available Apr 9 00:21:27.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:21:27.986: INFO: Pod daemon-set-nvlb5 is not available Apr 9 00:21:27.990: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5262, will wait for the garbage collector to delete the pods Apr 9 00:21:28.053: INFO: Deleting DaemonSet.extensions daemon-set took: 5.403472ms Apr 9 00:21:28.353: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.182072ms Apr 9 00:21:33.057: INFO: Number of nodes with available pods: 0 Apr 9 00:21:33.057: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 00:21:33.060: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5262/daemonsets","resourceVersion":"6548095"},"items":null} Apr 9 00:21:33.062: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5262/pods","resourceVersion":"6548095"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:21:33.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5262" for this suite. 
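The rollback test above updates a RollingUpdate DaemonSet to an unpullable image (`foo:non-existent`) and reverts before the rollout completes; pods still running the old image are expected to survive untouched ("without unnecessary restarts"). A sketch of the relevant spec and revert command — the selector and container name are assumptions:

```yaml
# revert with: kubectl rollout undo daemonset/daemon-set -n daemonsets-5262
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-5262
spec:
  updateStrategy:
    type: RollingUpdate                 # enables controller-managed rollout and rollback
  selector:
    matchLabels: {app: daemon-set}      # illustrative
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
        - name: app                     # illustrative
          image: docker.io/library/httpd:2.4.38-alpine   # the image the log shows the test rolling back to
```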
• [SLOW TEST:23.334 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":176,"skipped":3217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:21:33.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-81d64a8c-0b97-48ef-a7f5-08e0494335bd STEP: Creating secret with name s-test-opt-upd-7e2c20a7-43e8-4b2e-bc86-e23928c042c9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-81d64a8c-0b97-48ef-a7f5-08e0494335bd STEP: Updating secret s-test-opt-upd-7e2c20a7-43e8-4b2e-bc86-e23928c042c9 STEP: Creating secret with name s-test-opt-create-54d2eb60-03da-48ea-9e9e-5211cb392a25 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:22:59.699: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "projected-7979" for this suite. • [SLOW TEST:86.612 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:22:59.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7860 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7860 STEP: creating replication controller externalsvc in namespace services-7860 I0409 00:22:59.902410 7 runners.go:190] Created replication controller with name: 
externalsvc, namespace: services-7860, replica count: 2 I0409 00:23:02.952794 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:23:05.953016 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 9 00:23:06.109: INFO: Creating new exec pod Apr 9 00:23:10.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7860 execpod747pg -- /bin/sh -x -c nslookup nodeport-service' Apr 9 00:23:10.355: INFO: stderr: "I0409 00:23:10.255243 2198 log.go:172] (0xc00003af20) (0xc000811b80) Create stream\nI0409 00:23:10.255302 2198 log.go:172] (0xc00003af20) (0xc000811b80) Stream added, broadcasting: 1\nI0409 00:23:10.259942 2198 log.go:172] (0xc00003af20) Reply frame received for 1\nI0409 00:23:10.259977 2198 log.go:172] (0xc00003af20) (0xc0007db720) Create stream\nI0409 00:23:10.259986 2198 log.go:172] (0xc00003af20) (0xc0007db720) Stream added, broadcasting: 3\nI0409 00:23:10.260914 2198 log.go:172] (0xc00003af20) Reply frame received for 3\nI0409 00:23:10.260953 2198 log.go:172] (0xc00003af20) (0xc00056cb40) Create stream\nI0409 00:23:10.260964 2198 log.go:172] (0xc00003af20) (0xc00056cb40) Stream added, broadcasting: 5\nI0409 00:23:10.261910 2198 log.go:172] (0xc00003af20) Reply frame received for 5\nI0409 00:23:10.341451 2198 log.go:172] (0xc00003af20) Data frame received for 5\nI0409 00:23:10.341491 2198 log.go:172] (0xc00056cb40) (5) Data frame handling\nI0409 00:23:10.341513 2198 log.go:172] (0xc00056cb40) (5) Data frame sent\n+ nslookup nodeport-service\nI0409 00:23:10.347674 2198 log.go:172] (0xc00003af20) Data frame received for 3\nI0409 00:23:10.347707 2198 log.go:172] (0xc0007db720) (3) Data frame handling\nI0409 00:23:10.347731 2198 
log.go:172] (0xc0007db720) (3) Data frame sent\nI0409 00:23:10.348753 2198 log.go:172] (0xc00003af20) Data frame received for 3\nI0409 00:23:10.348775 2198 log.go:172] (0xc0007db720) (3) Data frame handling\nI0409 00:23:10.348794 2198 log.go:172] (0xc0007db720) (3) Data frame sent\nI0409 00:23:10.349678 2198 log.go:172] (0xc00003af20) Data frame received for 3\nI0409 00:23:10.349708 2198 log.go:172] (0xc0007db720) (3) Data frame handling\nI0409 00:23:10.349731 2198 log.go:172] (0xc00003af20) Data frame received for 5\nI0409 00:23:10.349742 2198 log.go:172] (0xc00056cb40) (5) Data frame handling\nI0409 00:23:10.350994 2198 log.go:172] (0xc00003af20) Data frame received for 1\nI0409 00:23:10.351008 2198 log.go:172] (0xc000811b80) (1) Data frame handling\nI0409 00:23:10.351020 2198 log.go:172] (0xc000811b80) (1) Data frame sent\nI0409 00:23:10.351031 2198 log.go:172] (0xc00003af20) (0xc000811b80) Stream removed, broadcasting: 1\nI0409 00:23:10.351048 2198 log.go:172] (0xc00003af20) Go away received\nI0409 00:23:10.351356 2198 log.go:172] (0xc00003af20) (0xc000811b80) Stream removed, broadcasting: 1\nI0409 00:23:10.351373 2198 log.go:172] (0xc00003af20) (0xc0007db720) Stream removed, broadcasting: 3\nI0409 00:23:10.351381 2198 log.go:172] (0xc00003af20) (0xc00056cb40) Stream removed, broadcasting: 5\n" Apr 9 00:23:10.355: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7860.svc.cluster.local\tcanonical name = externalsvc.services-7860.svc.cluster.local.\nName:\texternalsvc.services-7860.svc.cluster.local\nAddress: 10.96.132.81\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7860, will wait for the garbage collector to delete the pods Apr 9 00:23:10.412: INFO: Deleting ReplicationController externalsvc took: 4.984076ms Apr 9 00:23:10.712: INFO: Terminating ReplicationController externalsvc pods took: 300.251115ms Apr 9 00:23:23.053: INFO: Cleaning up the NodePort to ExternalName test service 
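The type change this test performs can be sketched as a Service manifest edit. A minimal hypothetical sketch (the names `nodeport-service`, `services-7860`, and `externalsvc.services-7860.svc.cluster.local` come from the log above; everything else is assumed):

```yaml
# Hypothetical sketch of the conversion under test: a Service
# originally of type NodePort is updated to type ExternalName,
# so cluster DNS answers lookups with a CNAME record instead of
# a cluster IP -- matching the nslookup output above, where
# nodeport-service resolves to a canonical name for externalsvc.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-7860
spec:
  type: ExternalName
  externalName: externalsvc.services-7860.svc.cluster.local
```

With `type: ExternalName` no cluster IP or node port is allocated; the service is purely a DNS alias.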
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:23:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7860" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.371 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":178,"skipped":3288,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:23:23.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3935 STEP: Creating active service to test reachability when its FQDN is 
referred as externalName for another service STEP: creating service externalsvc in namespace services-3935 STEP: creating replication controller externalsvc in namespace services-3935 I0409 00:23:23.239662 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3935, replica count: 2 I0409 00:23:26.290176 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:23:29.290416 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 9 00:23:29.319: INFO: Creating new exec pod Apr 9 00:23:33.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3935 execpod5st9q -- /bin/sh -x -c nslookup clusterip-service' Apr 9 00:23:33.616: INFO: stderr: "I0409 00:23:33.522774 2217 log.go:172] (0xc000b529a0) (0xc0006654a0) Create stream\nI0409 00:23:33.522834 2217 log.go:172] (0xc000b529a0) (0xc0006654a0) Stream added, broadcasting: 1\nI0409 00:23:33.527409 2217 log.go:172] (0xc000b529a0) Reply frame received for 1\nI0409 00:23:33.527465 2217 log.go:172] (0xc000b529a0) (0xc0003e4960) Create stream\nI0409 00:23:33.527479 2217 log.go:172] (0xc000b529a0) (0xc0003e4960) Stream added, broadcasting: 3\nI0409 00:23:33.529083 2217 log.go:172] (0xc000b529a0) Reply frame received for 3\nI0409 00:23:33.529277 2217 log.go:172] (0xc000b529a0) (0xc0003e4a00) Create stream\nI0409 00:23:33.529319 2217 log.go:172] (0xc000b529a0) (0xc0003e4a00) Stream added, broadcasting: 5\nI0409 00:23:33.534705 2217 log.go:172] (0xc000b529a0) Reply frame received for 5\nI0409 00:23:33.600309 2217 log.go:172] (0xc000b529a0) Data frame received for 5\nI0409 00:23:33.600346 2217 log.go:172] (0xc0003e4a00) (5) Data frame handling\nI0409 00:23:33.600367 
2217 log.go:172] (0xc0003e4a00) (5) Data frame sent\n+ nslookup clusterip-service\nI0409 00:23:33.608615 2217 log.go:172] (0xc000b529a0) Data frame received for 3\nI0409 00:23:33.608649 2217 log.go:172] (0xc0003e4960) (3) Data frame handling\nI0409 00:23:33.608677 2217 log.go:172] (0xc0003e4960) (3) Data frame sent\nI0409 00:23:33.609819 2217 log.go:172] (0xc000b529a0) Data frame received for 3\nI0409 00:23:33.609854 2217 log.go:172] (0xc0003e4960) (3) Data frame handling\nI0409 00:23:33.609889 2217 log.go:172] (0xc0003e4960) (3) Data frame sent\nI0409 00:23:33.610099 2217 log.go:172] (0xc000b529a0) Data frame received for 5\nI0409 00:23:33.610127 2217 log.go:172] (0xc0003e4a00) (5) Data frame handling\nI0409 00:23:33.610148 2217 log.go:172] (0xc000b529a0) Data frame received for 3\nI0409 00:23:33.610158 2217 log.go:172] (0xc0003e4960) (3) Data frame handling\nI0409 00:23:33.611656 2217 log.go:172] (0xc000b529a0) Data frame received for 1\nI0409 00:23:33.611692 2217 log.go:172] (0xc0006654a0) (1) Data frame handling\nI0409 00:23:33.611711 2217 log.go:172] (0xc0006654a0) (1) Data frame sent\nI0409 00:23:33.611750 2217 log.go:172] (0xc000b529a0) (0xc0006654a0) Stream removed, broadcasting: 1\nI0409 00:23:33.611778 2217 log.go:172] (0xc000b529a0) Go away received\nI0409 00:23:33.612213 2217 log.go:172] (0xc000b529a0) (0xc0006654a0) Stream removed, broadcasting: 1\nI0409 00:23:33.612238 2217 log.go:172] (0xc000b529a0) (0xc0003e4960) Stream removed, broadcasting: 3\nI0409 00:23:33.612251 2217 log.go:172] (0xc000b529a0) (0xc0003e4a00) Stream removed, broadcasting: 5\n" Apr 9 00:23:33.616: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3935.svc.cluster.local\tcanonical name = externalsvc.services-3935.svc.cluster.local.\nName:\texternalsvc.services-3935.svc.cluster.local\nAddress: 10.96.175.77\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3935, will wait for the garbage collector to delete the pods 
Apr 9 00:23:33.676: INFO: Deleting ReplicationController externalsvc took: 6.670669ms Apr 9 00:23:33.976: INFO: Terminating ReplicationController externalsvc pods took: 300.258676ms Apr 9 00:23:43.090: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:23:43.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3935" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:20.089 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":179,"skipped":3289,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:23:43.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item 
file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:23:43.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab" in namespace "downward-api-9679" to be "Succeeded or Failed" Apr 9 00:23:43.239: INFO: Pod "downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.603076ms Apr 9 00:23:45.248: INFO: Pod "downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012801988s Apr 9 00:23:47.254: INFO: Pod "downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019152178s STEP: Saw pod success Apr 9 00:23:47.254: INFO: Pod "downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab" satisfied condition "Succeeded or Failed" Apr 9 00:23:47.257: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab container client-container: STEP: delete the pod Apr 9 00:23:47.276: INFO: Waiting for pod downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab to disappear Apr 9 00:23:47.281: INFO: Pod downwardapi-volume-65053f9d-d083-49e4-8637-f5bb231dd3ab no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:23:47.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9679" for this suite. 
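The pod this test creates can be approximated by the following hypothetical manifest (all names and the image are assumptions, not taken from the log); the point is the per-item `mode` field on a downwardAPI volume item:

```yaml
# Hypothetical pod shaped like the one under test: a downwardAPI
# volume whose item carries an explicit file mode, which the
# container can then verify with ls -l.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # image assumed
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the per-item file mode under test
```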
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3304,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:23:47.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-dfb4e73e-a2f9-4e79-ba51-a9814396882b [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:23:47.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9075" for this suite. 
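What the empty-key test asserts can be illustrated with a hypothetical manifest (name and value assumed); the API server's validation rejects a Secret whose data map contains an empty key:

```yaml
# Hypothetical invalid Secret illustrating the failure under test:
# the empty key name "" does not satisfy Secret key validation,
# so creation is rejected by the API server.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test   # name assumed
data:
  "": dmFsdWU=                 # empty key -> create fails validation
```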
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":181,"skipped":3306,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:23:47.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-521 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-521 Apr 9 00:23:47.477: INFO: Found 0 stateful pods, waiting for 1 Apr 9 00:23:57.482: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 9 00:23:57.526: INFO: Deleting all statefulset in ns statefulset-521 Apr 9 00:23:57.555: INFO: Scaling statefulset ss to 0 Apr 9 00:24:17.627: INFO: Waiting 
for statefulset status.replicas updated to 0 Apr 9 00:24:17.631: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:24:17.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-521" for this suite. • [SLOW TEST:30.311 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":182,"skipped":3310,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:24:17.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:24:17.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1704" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":183,"skipped":3317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:24:17.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 9 00:24:17.793: INFO: Waiting up to 5m0s for pod "pod-26431081-a86a-47ff-ad53-17a8c4978748" in namespace "emptydir-26" to be "Succeeded or Failed" Apr 9 00:24:17.804: INFO: Pod "pod-26431081-a86a-47ff-ad53-17a8c4978748": Phase="Pending", Reason="", readiness=false. Elapsed: 11.052078ms Apr 9 00:24:19.833: INFO: Pod "pod-26431081-a86a-47ff-ad53-17a8c4978748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040729278s Apr 9 00:24:21.842: INFO: Pod "pod-26431081-a86a-47ff-ad53-17a8c4978748": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049384713s STEP: Saw pod success Apr 9 00:24:21.842: INFO: Pod "pod-26431081-a86a-47ff-ad53-17a8c4978748" satisfied condition "Succeeded or Failed" Apr 9 00:24:21.845: INFO: Trying to get logs from node latest-worker pod pod-26431081-a86a-47ff-ad53-17a8c4978748 container test-container: STEP: delete the pod Apr 9 00:24:21.918: INFO: Waiting for pod pod-26431081-a86a-47ff-ad53-17a8c4978748 to disappear Apr 9 00:24:21.929: INFO: Pod pod-26431081-a86a-47ff-ad53-17a8c4978748 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:24:21.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-26" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3379,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:24:21.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 9 00:24:22.122: INFO: Waiting up to 5m0s for pod "downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67" in namespace "downward-api-6071" to be "Succeeded or 
Failed" Apr 9 00:24:22.169: INFO: Pod "downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67": Phase="Pending", Reason="", readiness=false. Elapsed: 47.330634ms Apr 9 00:24:24.173: INFO: Pod "downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0513387s Apr 9 00:24:26.177: INFO: Pod "downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055069205s STEP: Saw pod success Apr 9 00:24:26.177: INFO: Pod "downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67" satisfied condition "Succeeded or Failed" Apr 9 00:24:26.180: INFO: Trying to get logs from node latest-worker pod downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67 container dapi-container: STEP: delete the pod Apr 9 00:24:26.194: INFO: Waiting for pod downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67 to disappear Apr 9 00:24:26.222: INFO: Pod downward-api-68bbc1f2-9723-401c-9f2e-69fc10e90c67 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:24:26.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6071" for this suite. 
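The behavior this test checks can be sketched as a hypothetical pod (names and image assumed): when a container declares no CPU or memory limits, `resourceFieldRef` for `limits.cpu` and `limits.memory` falls back to the node's allocatable capacity, which is what "default limits ... from node allocatable" refers to:

```yaml
# Hypothetical pod shaped like the one under test: env vars
# sourced from resourceFieldRef with no limits declared, so the
# values default to the node's allocatable CPU/memory.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # image assumed
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```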
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3387,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:24:26.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-ca1f83f3-38fb-445e-8766-e5e97234c14c STEP: Creating configMap with name cm-test-opt-upd-b18b3da6-a53c-4e65-a20f-553d4bed173c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ca1f83f3-38fb-445e-8766-e5e97234c14c STEP: Updating configmap cm-test-opt-upd-b18b3da6-a53c-4e65-a20f-553d4bed173c STEP: Creating configMap with name cm-test-opt-create-977c394d-04ea-4765-a287-3120a4f95373 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:25:46.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7259" for this suite. 
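The optional-ConfigMap behavior exercised above can be sketched as a hypothetical volume fragment (the ConfigMap name is assumed): with `optional: true` the pod starts even while the referenced ConfigMap is absent, and the kubelet projects the data into the volume once it is created, which is what the "waiting to observe update in volume" step verifies:

```yaml
# Hypothetical fragment of the pod's volumes section under test:
# optional: true lets the pod run before the ConfigMap exists,
# and deletions/updates are later reflected in the mounted files.
volumes:
- name: cm-volume
  configMap:
    name: cm-test-opt-create   # name assumed
    optional: true             # the behavior under test
```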
• [SLOW TEST:80.648 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3391,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:25:46.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:25:46.987: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db" in namespace "projected-1028" to be "Succeeded or Failed" Apr 9 00:25:47.041: INFO: Pod "downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db": Phase="Pending", Reason="", readiness=false. Elapsed: 54.720605ms Apr 9 00:25:49.045: INFO: Pod "downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.058507372s Apr 9 00:25:51.050: INFO: Pod "downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062943002s STEP: Saw pod success Apr 9 00:25:51.050: INFO: Pod "downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db" satisfied condition "Succeeded or Failed" Apr 9 00:25:51.053: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db container client-container: STEP: delete the pod Apr 9 00:25:51.086: INFO: Waiting for pod downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db to disappear Apr 9 00:25:51.092: INFO: Pod downwardapi-volume-16128baf-627d-4046-a62b-4b740fbd33db no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:25:51.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1028" for this suite. 
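The projected downwardAPI volume this test mounts can be approximated by a hypothetical fragment (names and divisor assumed): a `projected` volume with a `downwardAPI` source exposing the container's CPU limit via `resourceFieldRef`:

```yaml
# Hypothetical projected volume shaped like the one under test:
# the container's CPU limit is written to a file; for volume
# resourceFieldRefs the containerName is required, and divisor
# controls the reported unit.
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container   # must name a container in the pod
            resource: limits.cpu
            divisor: 1m                       # report the limit in millicores
```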
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:25:51.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-446de168-6782-4ed8-895c-7bd64d9037e1 STEP: Creating a pod to test consume secrets Apr 9 00:25:51.265: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d" in namespace "projected-8013" to be "Succeeded or Failed" Apr 9 00:25:51.272: INFO: Pod "pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.911421ms Apr 9 00:25:53.394: INFO: Pod "pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129720157s Apr 9 00:25:55.399: INFO: Pod "pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.13439824s
STEP: Saw pod success
Apr 9 00:25:55.399: INFO: Pod "pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d" satisfied condition "Succeeded or Failed"
Apr 9 00:25:55.402: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d container projected-secret-volume-test:
STEP: delete the pod
Apr 9 00:25:55.593: INFO: Waiting for pod pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d to disappear
Apr 9 00:25:55.607: INFO: Pod pod-projected-secrets-449246ed-2d6b-4b46-8df2-8be8af8eec5d no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:25:55.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8013" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3436,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:25:55.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
STEP: validating cluster-info
Apr 9 00:25:55.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Apr 9 00:25:55.739: INFO: stderr: ""
Apr 9 00:25:55.739: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:25:55.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8115" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":189,"skipped":3445,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:25:55.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:26:08.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4414" for this suite.
• [SLOW TEST:13.197 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[Conformance]","total":275,"completed":190,"skipped":3456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:08.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 9 00:26:19.118: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.118: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.157013 7 log.go:172] (0xc0044d24d0) (0xc0013f80a0) Create stream I0409 00:26:19.157048 7 log.go:172] (0xc0044d24d0) (0xc0013f80a0) Stream added, broadcasting: 1 I0409 00:26:19.159735 7 log.go:172] (0xc0044d24d0) Reply frame received for 1 I0409 00:26:19.159765 7 log.go:172] (0xc0044d24d0) (0xc001bca780) Create stream I0409 00:26:19.159774 7 log.go:172] (0xc0044d24d0) (0xc001bca780) Stream added, broadcasting: 3 I0409 00:26:19.160618 7 log.go:172] (0xc0044d24d0) Reply frame received for 3 I0409 00:26:19.160656 7 log.go:172] (0xc0044d24d0) 
(0xc001bca960) Create stream I0409 00:26:19.160663 7 log.go:172] (0xc0044d24d0) (0xc001bca960) Stream added, broadcasting: 5 I0409 00:26:19.161865 7 log.go:172] (0xc0044d24d0) Reply frame received for 5 I0409 00:26:19.234023 7 log.go:172] (0xc0044d24d0) Data frame received for 5 I0409 00:26:19.234060 7 log.go:172] (0xc001bca960) (5) Data frame handling I0409 00:26:19.234088 7 log.go:172] (0xc0044d24d0) Data frame received for 3 I0409 00:26:19.234102 7 log.go:172] (0xc001bca780) (3) Data frame handling I0409 00:26:19.234116 7 log.go:172] (0xc001bca780) (3) Data frame sent I0409 00:26:19.234128 7 log.go:172] (0xc0044d24d0) Data frame received for 3 I0409 00:26:19.234139 7 log.go:172] (0xc001bca780) (3) Data frame handling I0409 00:26:19.235666 7 log.go:172] (0xc0044d24d0) Data frame received for 1 I0409 00:26:19.235707 7 log.go:172] (0xc0013f80a0) (1) Data frame handling I0409 00:26:19.235731 7 log.go:172] (0xc0013f80a0) (1) Data frame sent I0409 00:26:19.235756 7 log.go:172] (0xc0044d24d0) (0xc0013f80a0) Stream removed, broadcasting: 1 I0409 00:26:19.235774 7 log.go:172] (0xc0044d24d0) Go away received I0409 00:26:19.236023 7 log.go:172] (0xc0044d24d0) (0xc0013f80a0) Stream removed, broadcasting: 1 I0409 00:26:19.236081 7 log.go:172] (0xc0044d24d0) (0xc001bca780) Stream removed, broadcasting: 3 I0409 00:26:19.236099 7 log.go:172] (0xc0044d24d0) (0xc001bca960) Stream removed, broadcasting: 5 Apr 9 00:26:19.236: INFO: Exec stderr: "" Apr 9 00:26:19.236: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.236: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.266691 7 log.go:172] (0xc00299e370) (0xc0012b03c0) Create stream I0409 00:26:19.266712 7 log.go:172] (0xc00299e370) (0xc0012b03c0) Stream added, broadcasting: 1 I0409 00:26:19.269632 7 log.go:172] (0xc00299e370) Reply frame received for 1 
I0409 00:26:19.269684 7 log.go:172] (0xc00299e370) (0xc001a78960) Create stream I0409 00:26:19.269695 7 log.go:172] (0xc00299e370) (0xc001a78960) Stream added, broadcasting: 3 I0409 00:26:19.270793 7 log.go:172] (0xc00299e370) Reply frame received for 3 I0409 00:26:19.270823 7 log.go:172] (0xc00299e370) (0xc0013f81e0) Create stream I0409 00:26:19.270834 7 log.go:172] (0xc00299e370) (0xc0013f81e0) Stream added, broadcasting: 5 I0409 00:26:19.271713 7 log.go:172] (0xc00299e370) Reply frame received for 5 I0409 00:26:19.321800 7 log.go:172] (0xc00299e370) Data frame received for 5 I0409 00:26:19.321840 7 log.go:172] (0xc0013f81e0) (5) Data frame handling I0409 00:26:19.321876 7 log.go:172] (0xc00299e370) Data frame received for 3 I0409 00:26:19.321894 7 log.go:172] (0xc001a78960) (3) Data frame handling I0409 00:26:19.321918 7 log.go:172] (0xc001a78960) (3) Data frame sent I0409 00:26:19.321948 7 log.go:172] (0xc00299e370) Data frame received for 3 I0409 00:26:19.321966 7 log.go:172] (0xc001a78960) (3) Data frame handling I0409 00:26:19.323211 7 log.go:172] (0xc00299e370) Data frame received for 1 I0409 00:26:19.323234 7 log.go:172] (0xc0012b03c0) (1) Data frame handling I0409 00:26:19.323245 7 log.go:172] (0xc0012b03c0) (1) Data frame sent I0409 00:26:19.323257 7 log.go:172] (0xc00299e370) (0xc0012b03c0) Stream removed, broadcasting: 1 I0409 00:26:19.323267 7 log.go:172] (0xc00299e370) Go away received I0409 00:26:19.323386 7 log.go:172] (0xc00299e370) (0xc0012b03c0) Stream removed, broadcasting: 1 I0409 00:26:19.323409 7 log.go:172] (0xc00299e370) (0xc001a78960) Stream removed, broadcasting: 3 I0409 00:26:19.323426 7 log.go:172] (0xc00299e370) (0xc0013f81e0) Stream removed, broadcasting: 5 Apr 9 00:26:19.323: INFO: Exec stderr: "" Apr 9 00:26:19.323: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.323: 
INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.358878 7 log.go:172] (0xc0044d2b00) (0xc0013f8460) Create stream I0409 00:26:19.358905 7 log.go:172] (0xc0044d2b00) (0xc0013f8460) Stream added, broadcasting: 1 I0409 00:26:19.362095 7 log.go:172] (0xc0044d2b00) Reply frame received for 1 I0409 00:26:19.362144 7 log.go:172] (0xc0044d2b00) (0xc0012b0500) Create stream I0409 00:26:19.362170 7 log.go:172] (0xc0044d2b00) (0xc0012b0500) Stream added, broadcasting: 3 I0409 00:26:19.363341 7 log.go:172] (0xc0044d2b00) Reply frame received for 3 I0409 00:26:19.363402 7 log.go:172] (0xc0044d2b00) (0xc001bcaaa0) Create stream I0409 00:26:19.363419 7 log.go:172] (0xc0044d2b00) (0xc001bcaaa0) Stream added, broadcasting: 5 I0409 00:26:19.365486 7 log.go:172] (0xc0044d2b00) Reply frame received for 5 I0409 00:26:19.429945 7 log.go:172] (0xc0044d2b00) Data frame received for 5 I0409 00:26:19.429983 7 log.go:172] (0xc001bcaaa0) (5) Data frame handling I0409 00:26:19.430004 7 log.go:172] (0xc0044d2b00) Data frame received for 3 I0409 00:26:19.430014 7 log.go:172] (0xc0012b0500) (3) Data frame handling I0409 00:26:19.430027 7 log.go:172] (0xc0012b0500) (3) Data frame sent I0409 00:26:19.430037 7 log.go:172] (0xc0044d2b00) Data frame received for 3 I0409 00:26:19.430044 7 log.go:172] (0xc0012b0500) (3) Data frame handling I0409 00:26:19.431104 7 log.go:172] (0xc0044d2b00) Data frame received for 1 I0409 00:26:19.431130 7 log.go:172] (0xc0013f8460) (1) Data frame handling I0409 00:26:19.431148 7 log.go:172] (0xc0013f8460) (1) Data frame sent I0409 00:26:19.431163 7 log.go:172] (0xc0044d2b00) (0xc0013f8460) Stream removed, broadcasting: 1 I0409 00:26:19.431178 7 log.go:172] (0xc0044d2b00) Go away received I0409 00:26:19.431304 7 log.go:172] (0xc0044d2b00) (0xc0013f8460) Stream removed, broadcasting: 1 I0409 00:26:19.431324 7 log.go:172] (0xc0044d2b00) (0xc0012b0500) Stream removed, broadcasting: 3 I0409 00:26:19.431337 7 log.go:172] (0xc0044d2b00) (0xc001bcaaa0) Stream removed, 
broadcasting: 5 Apr 9 00:26:19.431: INFO: Exec stderr: "" Apr 9 00:26:19.431: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.431: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.462145 7 log.go:172] (0xc00299e9a0) (0xc0012b0a00) Create stream I0409 00:26:19.462172 7 log.go:172] (0xc00299e9a0) (0xc0012b0a00) Stream added, broadcasting: 1 I0409 00:26:19.464678 7 log.go:172] (0xc00299e9a0) Reply frame received for 1 I0409 00:26:19.464728 7 log.go:172] (0xc00299e9a0) (0xc001c035e0) Create stream I0409 00:26:19.464749 7 log.go:172] (0xc00299e9a0) (0xc001c035e0) Stream added, broadcasting: 3 I0409 00:26:19.465771 7 log.go:172] (0xc00299e9a0) Reply frame received for 3 I0409 00:26:19.465817 7 log.go:172] (0xc00299e9a0) (0xc0012b0aa0) Create stream I0409 00:26:19.465831 7 log.go:172] (0xc00299e9a0) (0xc0012b0aa0) Stream added, broadcasting: 5 I0409 00:26:19.466686 7 log.go:172] (0xc00299e9a0) Reply frame received for 5 I0409 00:26:19.529921 7 log.go:172] (0xc00299e9a0) Data frame received for 5 I0409 00:26:19.529970 7 log.go:172] (0xc0012b0aa0) (5) Data frame handling I0409 00:26:19.530002 7 log.go:172] (0xc00299e9a0) Data frame received for 3 I0409 00:26:19.530017 7 log.go:172] (0xc001c035e0) (3) Data frame handling I0409 00:26:19.530029 7 log.go:172] (0xc001c035e0) (3) Data frame sent I0409 00:26:19.530043 7 log.go:172] (0xc00299e9a0) Data frame received for 3 I0409 00:26:19.530061 7 log.go:172] (0xc001c035e0) (3) Data frame handling I0409 00:26:19.531467 7 log.go:172] (0xc00299e9a0) Data frame received for 1 I0409 00:26:19.531495 7 log.go:172] (0xc0012b0a00) (1) Data frame handling I0409 00:26:19.531510 7 log.go:172] (0xc0012b0a00) (1) Data frame sent I0409 00:26:19.531533 7 log.go:172] (0xc00299e9a0) (0xc0012b0a00) Stream removed, broadcasting: 1 I0409 00:26:19.531554 7 log.go:172] 
(0xc00299e9a0) Go away received I0409 00:26:19.531736 7 log.go:172] (0xc00299e9a0) (0xc0012b0a00) Stream removed, broadcasting: 1 I0409 00:26:19.531764 7 log.go:172] (0xc00299e9a0) (0xc001c035e0) Stream removed, broadcasting: 3 I0409 00:26:19.531777 7 log.go:172] (0xc00299e9a0) (0xc0012b0aa0) Stream removed, broadcasting: 5 Apr 9 00:26:19.531: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 9 00:26:19.531: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.531: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.564626 7 log.go:172] (0xc002bb5600) (0xc001bcaf00) Create stream I0409 00:26:19.564648 7 log.go:172] (0xc002bb5600) (0xc001bcaf00) Stream added, broadcasting: 1 I0409 00:26:19.566922 7 log.go:172] (0xc002bb5600) Reply frame received for 1 I0409 00:26:19.566962 7 log.go:172] (0xc002bb5600) (0xc001c03680) Create stream I0409 00:26:19.566979 7 log.go:172] (0xc002bb5600) (0xc001c03680) Stream added, broadcasting: 3 I0409 00:26:19.567869 7 log.go:172] (0xc002bb5600) Reply frame received for 3 I0409 00:26:19.567900 7 log.go:172] (0xc002bb5600) (0xc001c037c0) Create stream I0409 00:26:19.567910 7 log.go:172] (0xc002bb5600) (0xc001c037c0) Stream added, broadcasting: 5 I0409 00:26:19.568836 7 log.go:172] (0xc002bb5600) Reply frame received for 5 I0409 00:26:19.643055 7 log.go:172] (0xc002bb5600) Data frame received for 3 I0409 00:26:19.643093 7 log.go:172] (0xc001c03680) (3) Data frame handling I0409 00:26:19.643112 7 log.go:172] (0xc001c03680) (3) Data frame sent I0409 00:26:19.643129 7 log.go:172] (0xc002bb5600) Data frame received for 3 I0409 00:26:19.643147 7 log.go:172] (0xc001c03680) (3) Data frame handling I0409 00:26:19.643193 7 log.go:172] (0xc002bb5600) Data frame received for 5 I0409 00:26:19.643227 7 
log.go:172] (0xc001c037c0) (5) Data frame handling I0409 00:26:19.644751 7 log.go:172] (0xc002bb5600) Data frame received for 1 I0409 00:26:19.644780 7 log.go:172] (0xc001bcaf00) (1) Data frame handling I0409 00:26:19.644819 7 log.go:172] (0xc001bcaf00) (1) Data frame sent I0409 00:26:19.644845 7 log.go:172] (0xc002bb5600) (0xc001bcaf00) Stream removed, broadcasting: 1 I0409 00:26:19.644883 7 log.go:172] (0xc002bb5600) Go away received I0409 00:26:19.644998 7 log.go:172] (0xc002bb5600) (0xc001bcaf00) Stream removed, broadcasting: 1 I0409 00:26:19.645033 7 log.go:172] (0xc002bb5600) (0xc001c03680) Stream removed, broadcasting: 3 I0409 00:26:19.645047 7 log.go:172] (0xc002bb5600) (0xc001c037c0) Stream removed, broadcasting: 5 Apr 9 00:26:19.645: INFO: Exec stderr: "" Apr 9 00:26:19.645: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.645: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.683676 7 log.go:172] (0xc002d286e0) (0xc001a78d20) Create stream I0409 00:26:19.683701 7 log.go:172] (0xc002d286e0) (0xc001a78d20) Stream added, broadcasting: 1 I0409 00:26:19.686119 7 log.go:172] (0xc002d286e0) Reply frame received for 1 I0409 00:26:19.686164 7 log.go:172] (0xc002d286e0) (0xc0012b0be0) Create stream I0409 00:26:19.686180 7 log.go:172] (0xc002d286e0) (0xc0012b0be0) Stream added, broadcasting: 3 I0409 00:26:19.687208 7 log.go:172] (0xc002d286e0) Reply frame received for 3 I0409 00:26:19.687245 7 log.go:172] (0xc002d286e0) (0xc0012b0c80) Create stream I0409 00:26:19.687257 7 log.go:172] (0xc002d286e0) (0xc0012b0c80) Stream added, broadcasting: 5 I0409 00:26:19.688195 7 log.go:172] (0xc002d286e0) Reply frame received for 5 I0409 00:26:19.749624 7 log.go:172] (0xc002d286e0) Data frame received for 3 I0409 00:26:19.749653 7 log.go:172] (0xc0012b0be0) (3) Data frame handling I0409 00:26:19.749666 
7 log.go:172] (0xc0012b0be0) (3) Data frame sent I0409 00:26:19.749676 7 log.go:172] (0xc002d286e0) Data frame received for 3 I0409 00:26:19.749691 7 log.go:172] (0xc0012b0be0) (3) Data frame handling I0409 00:26:19.749709 7 log.go:172] (0xc002d286e0) Data frame received for 5 I0409 00:26:19.749721 7 log.go:172] (0xc0012b0c80) (5) Data frame handling I0409 00:26:19.751311 7 log.go:172] (0xc002d286e0) Data frame received for 1 I0409 00:26:19.751375 7 log.go:172] (0xc001a78d20) (1) Data frame handling I0409 00:26:19.751410 7 log.go:172] (0xc001a78d20) (1) Data frame sent I0409 00:26:19.751723 7 log.go:172] (0xc002d286e0) (0xc001a78d20) Stream removed, broadcasting: 1 I0409 00:26:19.751775 7 log.go:172] (0xc002d286e0) Go away received I0409 00:26:19.751873 7 log.go:172] (0xc002d286e0) (0xc001a78d20) Stream removed, broadcasting: 1 I0409 00:26:19.751905 7 log.go:172] (0xc002d286e0) (0xc0012b0be0) Stream removed, broadcasting: 3 I0409 00:26:19.751925 7 log.go:172] (0xc002d286e0) (0xc0012b0c80) Stream removed, broadcasting: 5 Apr 9 00:26:19.751: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 9 00:26:19.751: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.752: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.786121 7 log.go:172] (0xc00299f130) (0xc0012b1220) Create stream I0409 00:26:19.786155 7 log.go:172] (0xc00299f130) (0xc0012b1220) Stream added, broadcasting: 1 I0409 00:26:19.788473 7 log.go:172] (0xc00299f130) Reply frame received for 1 I0409 00:26:19.788527 7 log.go:172] (0xc00299f130) (0xc0013f8500) Create stream I0409 00:26:19.788544 7 log.go:172] (0xc00299f130) (0xc0013f8500) Stream added, broadcasting: 3 I0409 00:26:19.789649 7 log.go:172] (0xc00299f130) Reply frame received for 3 I0409 00:26:19.789673 
7 log.go:172] (0xc00299f130) (0xc001a79220) Create stream I0409 00:26:19.789693 7 log.go:172] (0xc00299f130) (0xc001a79220) Stream added, broadcasting: 5 I0409 00:26:19.790667 7 log.go:172] (0xc00299f130) Reply frame received for 5 I0409 00:26:19.850482 7 log.go:172] (0xc00299f130) Data frame received for 5 I0409 00:26:19.850521 7 log.go:172] (0xc001a79220) (5) Data frame handling I0409 00:26:19.850540 7 log.go:172] (0xc00299f130) Data frame received for 3 I0409 00:26:19.850549 7 log.go:172] (0xc0013f8500) (3) Data frame handling I0409 00:26:19.850555 7 log.go:172] (0xc0013f8500) (3) Data frame sent I0409 00:26:19.850562 7 log.go:172] (0xc00299f130) Data frame received for 3 I0409 00:26:19.850570 7 log.go:172] (0xc0013f8500) (3) Data frame handling I0409 00:26:19.852065 7 log.go:172] (0xc00299f130) Data frame received for 1 I0409 00:26:19.852085 7 log.go:172] (0xc0012b1220) (1) Data frame handling I0409 00:26:19.852095 7 log.go:172] (0xc0012b1220) (1) Data frame sent I0409 00:26:19.852103 7 log.go:172] (0xc00299f130) (0xc0012b1220) Stream removed, broadcasting: 1 I0409 00:26:19.852118 7 log.go:172] (0xc00299f130) Go away received I0409 00:26:19.852270 7 log.go:172] (0xc00299f130) (0xc0012b1220) Stream removed, broadcasting: 1 I0409 00:26:19.852304 7 log.go:172] (0xc00299f130) (0xc0013f8500) Stream removed, broadcasting: 3 I0409 00:26:19.852328 7 log.go:172] (0xc00299f130) (0xc001a79220) Stream removed, broadcasting: 5 Apr 9 00:26:19.852: INFO: Exec stderr: "" Apr 9 00:26:19.852: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.852: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:19.897947 7 log.go:172] (0xc002d28d10) (0xc001a79680) Create stream I0409 00:26:19.897985 7 log.go:172] (0xc002d28d10) (0xc001a79680) Stream added, broadcasting: 1 I0409 00:26:19.901034 7 log.go:172] 
(0xc002d28d10) Reply frame received for 1 I0409 00:26:19.901058 7 log.go:172] (0xc002d28d10) (0xc001a79720) Create stream I0409 00:26:19.901066 7 log.go:172] (0xc002d28d10) (0xc001a79720) Stream added, broadcasting: 3 I0409 00:26:19.903047 7 log.go:172] (0xc002d28d10) Reply frame received for 3 I0409 00:26:19.903126 7 log.go:172] (0xc002d28d10) (0xc0013f85a0) Create stream I0409 00:26:19.903161 7 log.go:172] (0xc002d28d10) (0xc0013f85a0) Stream added, broadcasting: 5 I0409 00:26:19.904602 7 log.go:172] (0xc002d28d10) Reply frame received for 5 I0409 00:26:19.966980 7 log.go:172] (0xc002d28d10) Data frame received for 5 I0409 00:26:19.967016 7 log.go:172] (0xc0013f85a0) (5) Data frame handling I0409 00:26:19.967039 7 log.go:172] (0xc002d28d10) Data frame received for 3 I0409 00:26:19.967055 7 log.go:172] (0xc001a79720) (3) Data frame handling I0409 00:26:19.967071 7 log.go:172] (0xc001a79720) (3) Data frame sent I0409 00:26:19.967079 7 log.go:172] (0xc002d28d10) Data frame received for 3 I0409 00:26:19.967084 7 log.go:172] (0xc001a79720) (3) Data frame handling I0409 00:26:19.968743 7 log.go:172] (0xc002d28d10) Data frame received for 1 I0409 00:26:19.968771 7 log.go:172] (0xc001a79680) (1) Data frame handling I0409 00:26:19.968794 7 log.go:172] (0xc001a79680) (1) Data frame sent I0409 00:26:19.968817 7 log.go:172] (0xc002d28d10) (0xc001a79680) Stream removed, broadcasting: 1 I0409 00:26:19.968833 7 log.go:172] (0xc002d28d10) Go away received I0409 00:26:19.968932 7 log.go:172] (0xc002d28d10) (0xc001a79680) Stream removed, broadcasting: 1 I0409 00:26:19.968948 7 log.go:172] (0xc002d28d10) (0xc001a79720) Stream removed, broadcasting: 3 I0409 00:26:19.968955 7 log.go:172] (0xc002d28d10) (0xc0013f85a0) Stream removed, broadcasting: 5 Apr 9 00:26:19.968: INFO: Exec stderr: "" Apr 9 00:26:19.968: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:19.969: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:20.018197 7 log.go:172] (0xc00299f760) (0xc0012b14a0) Create stream I0409 00:26:20.018226 7 log.go:172] (0xc00299f760) (0xc0012b14a0) Stream added, broadcasting: 1 I0409 00:26:20.020677 7 log.go:172] (0xc00299f760) Reply frame received for 1 I0409 00:26:20.020711 7 log.go:172] (0xc00299f760) (0xc0013f88c0) Create stream I0409 00:26:20.020720 7 log.go:172] (0xc00299f760) (0xc0013f88c0) Stream added, broadcasting: 3 I0409 00:26:20.022002 7 log.go:172] (0xc00299f760) Reply frame received for 3 I0409 00:26:20.022040 7 log.go:172] (0xc00299f760) (0xc0013f8a00) Create stream I0409 00:26:20.022055 7 log.go:172] (0xc00299f760) (0xc0013f8a00) Stream added, broadcasting: 5 I0409 00:26:20.023053 7 log.go:172] (0xc00299f760) Reply frame received for 5 I0409 00:26:20.083850 7 log.go:172] (0xc00299f760) Data frame received for 5 I0409 00:26:20.083902 7 log.go:172] (0xc0013f8a00) (5) Data frame handling I0409 00:26:20.083934 7 log.go:172] (0xc00299f760) Data frame received for 3 I0409 00:26:20.083953 7 log.go:172] (0xc0013f88c0) (3) Data frame handling I0409 00:26:20.083974 7 log.go:172] (0xc0013f88c0) (3) Data frame sent I0409 00:26:20.083988 7 log.go:172] (0xc00299f760) Data frame received for 3 I0409 00:26:20.084007 7 log.go:172] (0xc0013f88c0) (3) Data frame handling I0409 00:26:20.085890 7 log.go:172] (0xc00299f760) Data frame received for 1 I0409 00:26:20.085912 7 log.go:172] (0xc0012b14a0) (1) Data frame handling I0409 00:26:20.085927 7 log.go:172] (0xc0012b14a0) (1) Data frame sent I0409 00:26:20.085941 7 log.go:172] (0xc00299f760) (0xc0012b14a0) Stream removed, broadcasting: 1 I0409 00:26:20.086062 7 log.go:172] (0xc00299f760) (0xc0012b14a0) Stream removed, broadcasting: 1 I0409 00:26:20.086100 7 log.go:172] (0xc00299f760) Go away received I0409 00:26:20.086170 7 log.go:172] (0xc00299f760) (0xc0013f88c0) Stream removed, broadcasting: 3 I0409 
00:26:20.086196 7 log.go:172] (0xc00299f760) (0xc0013f8a00) Stream removed, broadcasting: 5 Apr 9 00:26:20.086: INFO: Exec stderr: "" Apr 9 00:26:20.086: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8687 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:26:20.086: INFO: >>> kubeConfig: /root/.kube/config I0409 00:26:20.123204 7 log.go:172] (0xc002bb5c30) (0xc001bcb0e0) Create stream I0409 00:26:20.123241 7 log.go:172] (0xc002bb5c30) (0xc001bcb0e0) Stream added, broadcasting: 1 I0409 00:26:20.126404 7 log.go:172] (0xc002bb5c30) Reply frame received for 1 I0409 00:26:20.126436 7 log.go:172] (0xc002bb5c30) (0xc001a797c0) Create stream I0409 00:26:20.126454 7 log.go:172] (0xc002bb5c30) (0xc001a797c0) Stream added, broadcasting: 3 I0409 00:26:20.127505 7 log.go:172] (0xc002bb5c30) Reply frame received for 3 I0409 00:26:20.127551 7 log.go:172] (0xc002bb5c30) (0xc001a79860) Create stream I0409 00:26:20.127566 7 log.go:172] (0xc002bb5c30) (0xc001a79860) Stream added, broadcasting: 5 I0409 00:26:20.128514 7 log.go:172] (0xc002bb5c30) Reply frame received for 5 I0409 00:26:20.190412 7 log.go:172] (0xc002bb5c30) Data frame received for 3 I0409 00:26:20.190465 7 log.go:172] (0xc001a797c0) (3) Data frame handling I0409 00:26:20.190501 7 log.go:172] (0xc001a797c0) (3) Data frame sent I0409 00:26:20.190522 7 log.go:172] (0xc002bb5c30) Data frame received for 3 I0409 00:26:20.190538 7 log.go:172] (0xc001a797c0) (3) Data frame handling I0409 00:26:20.190576 7 log.go:172] (0xc002bb5c30) Data frame received for 5 I0409 00:26:20.190634 7 log.go:172] (0xc001a79860) (5) Data frame handling I0409 00:26:20.191860 7 log.go:172] (0xc002bb5c30) Data frame received for 1 I0409 00:26:20.191885 7 log.go:172] (0xc001bcb0e0) (1) Data frame handling I0409 00:26:20.191900 7 log.go:172] (0xc001bcb0e0) (1) Data frame sent I0409 00:26:20.191938 7 log.go:172] (0xc002bb5c30) 
(0xc001bcb0e0) Stream removed, broadcasting: 1
I0409 00:26:20.192042 7 log.go:172] (0xc002bb5c30) (0xc001bcb0e0) Stream removed, broadcasting: 1
I0409 00:26:20.192064 7 log.go:172] (0xc002bb5c30) (0xc001a797c0) Stream removed, broadcasting: 3
I0409 00:26:20.192230 7 log.go:172] (0xc002bb5c30) (0xc001a79860) Stream removed, broadcasting: 5
Apr 9 00:26:20.192: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
I0409 00:26:20.192513 7 log.go:172] (0xc002bb5c30) Go away received
Apr 9 00:26:20.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8687" for this suite.
• [SLOW TEST:11.257 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3506,"failed":0}
SS
------------------------------
[k8s.io] Lease
lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:26:20.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:26:20.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3856" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":192,"skipped":3508,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:26:20.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 9 00:26:20.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 00:26:23.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988781, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988781, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988781, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721988780, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 00:26:26.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 9 00:26:30.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-1440 to-be-attached-pod -i -c=container1'
Apr 9 00:26:30.249: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:26:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1440" for this suite.
STEP: Destroying namespace "webhook-1440-markers" for this suite.
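Note: the "deny attaching pod" run above registers a validating webhook that intercepts CONNECT requests on the pods/attach subresource, which is why `kubectl attach` exits with rc: 1. The registration object itself is not printed in the log; a minimal sketch of what such a configuration looks like follows (the webhook name, path, and caBundle placeholder are illustrative, not taken from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # illustrative name
webhooks:
  - name: deny-attaching-pod.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CONNECT"]          # 'kubectl attach' issues a CONNECT on a subresource
        resources: ["pods/attach"]
    clientConfig:
      service:
        namespace: webhook-1440          # namespace from the run above
        name: e2e-test-webhook
        path: /pods/attach               # illustrative path
      caBundle: <base64-encoded CA>      # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

When the webhook responds with `allowed: false`, the API server rejects the attach and kubectl reports a nonzero exit code, matching the `rc: 1` logged above.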
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.995 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":193,"skipped":3509,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:30.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:26:30.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730" in namespace "downward-api-8039" to be "Succeeded or Failed" 
Apr 9 00:26:30.441: INFO: Pod "downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093851ms Apr 9 00:26:32.466: INFO: Pod "downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02893892s Apr 9 00:26:34.470: INFO: Pod "downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032910327s STEP: Saw pod success Apr 9 00:26:34.470: INFO: Pod "downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730" satisfied condition "Succeeded or Failed" Apr 9 00:26:34.474: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730 container client-container: STEP: delete the pod Apr 9 00:26:34.522: INFO: Waiting for pod downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730 to disappear Apr 9 00:26:34.531: INFO: Pod downwardapi-volume-b82c32d3-1d71-4570-870b-83d3faa74730 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:26:34.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8039" for this suite. 
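The downward API test above relies on a documented fallback: when a container declares no memory limit, `resourceFieldRef: limits.memory` resolves to the node's allocatable memory. A sketch of the volume shape being exercised (container name from the log; pod name, image, and paths are assumptions):

```yaml
# Sketch only; the e2e framework generates the real pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # placeholder
spec:
  containers:
    - name: client-container            # from the log
      image: busybox                    # assumed image
      command: ["cat", "/etc/podinfo/memory_limit"]
      # note: no resources.limits.memory set on purpose
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # unset limit -> node allocatable
```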
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:34.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-5d1bd009-9dc4-452e-857a-8deb62820c70 STEP: Creating a pod to test consume secrets Apr 9 00:26:34.655: INFO: Waiting up to 5m0s for pod "pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7" in namespace "secrets-5997" to be "Succeeded or Failed" Apr 9 00:26:34.663: INFO: Pod "pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.605778ms Apr 9 00:26:36.667: INFO: Pod "pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011619922s Apr 9 00:26:38.671: INFO: Pod "pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016137354s STEP: Saw pod success Apr 9 00:26:38.671: INFO: Pod "pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7" satisfied condition "Succeeded or Failed" Apr 9 00:26:38.675: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7 container secret-volume-test: STEP: delete the pod Apr 9 00:26:38.698: INFO: Waiting for pod pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7 to disappear Apr 9 00:26:38.710: INFO: Pod pod-secrets-3c6900f8-c41c-4ce4-a4ed-e5bf2d2bb3a7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:26:38.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5997" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3549,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:38.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-013fbf2b-c7f5-4aa0-9c4f-d2b3d73d241f STEP: Creating a pod to test 
consume configMaps Apr 9 00:26:38.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54" in namespace "projected-8184" to be "Succeeded or Failed" Apr 9 00:26:38.806: INFO: Pod "pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54": Phase="Pending", Reason="", readiness=false. Elapsed: 15.414903ms Apr 9 00:26:40.809: INFO: Pod "pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018534864s Apr 9 00:26:42.813: INFO: Pod "pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021682758s STEP: Saw pod success Apr 9 00:26:42.813: INFO: Pod "pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54" satisfied condition "Succeeded or Failed" Apr 9 00:26:42.815: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54 container projected-configmap-volume-test: STEP: delete the pod Apr 9 00:26:42.832: INFO: Waiting for pod pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54 to disappear Apr 9 00:26:42.836: INFO: Pod pod-projected-configmaps-b69a3d97-3b52-4652-a90c-c5bc6aeadc54 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:26:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8184" for this suite. 
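"Consumable in multiple volumes in the same pod" means one ConfigMap backing two distinct projected volumes mounted at different paths. A sketch under that assumption (the ConfigMap name is from the log; volume names, mount paths, and the image are placeholders):

```yaml
# Sketch only; the e2e test constructs this pod in code.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-multi-volume   # placeholder
spec:
  containers:
    - name: projected-configmap-volume-test # from the log
      image: busybox                        # assumed image
      volumeMounts:
        - name: cm-volume-1
          mountPath: /etc/cm-1
        - name: cm-volume-2
          mountPath: /etc/cm-2
  volumes:
    - name: cm-volume-1
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume-013fbf2b-c7f5-4aa0-9c4f-d2b3d73d241f
    - name: cm-volume-2
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume-013fbf2b-c7f5-4aa0-9c4f-d2b3d73d241f
```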
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3570,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:42.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6674.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6674.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6674.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6674.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6674.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6674.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:26:49.048: INFO: DNS probes using dns-6674/dns-test-b49a1930-2777-4c35-b832-ddc693ff93b6 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:26:49.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6674" for this suite. 
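The hostname record probed above (`dns-querier-2.dns-test-service-2.dns-6674.svc.cluster.local`) comes from pairing a headless Service with a pod that sets `hostname` and `subdomain`. A sketch of that pairing (service and pod names from the log; the selector, port, and image are assumptions):

```yaml
# Sketch only: this is the mechanism behind the getent/dig probes above.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2        # from the log
spec:
  clusterIP: None                 # headless: per-pod DNS records
  selector:
    dns-test: "true"              # assumed selector
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2             # from the log
  labels:
    dns-test: "true"              # must match the assumed selector
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # must match the headless Service name
  containers:
    - name: querier
      image: busybox              # assumed image
      command: ["sleep", "600"]
```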
• [SLOW TEST:6.355 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":197,"skipped":3583,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:49.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:26:49.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-485" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":198,"skipped":3587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:26:49.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5352 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 00:26:49.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 9 00:26:49.830: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 9 00:26:51.834: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 9 00:26:53.834: INFO: The status of Pod netserver-0 is Running (Ready = 
false) Apr 9 00:26:55.834: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:26:57.834: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:26:59.834: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:27:01.834: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:27:03.834: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:27:05.835: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 9 00:27:05.847: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 9 00:27:09.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.188:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5352 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:27:09.908: INFO: >>> kubeConfig: /root/.kube/config I0409 00:27:09.934717 7 log.go:172] (0xc0044d2a50) (0xc000b323c0) Create stream I0409 00:27:09.934741 7 log.go:172] (0xc0044d2a50) (0xc000b323c0) Stream added, broadcasting: 1 I0409 00:27:09.936293 7 log.go:172] (0xc0044d2a50) Reply frame received for 1 I0409 00:27:09.936341 7 log.go:172] (0xc0044d2a50) (0xc000da7d60) Create stream I0409 00:27:09.936356 7 log.go:172] (0xc0044d2a50) (0xc000da7d60) Stream added, broadcasting: 3 I0409 00:27:09.937283 7 log.go:172] (0xc0044d2a50) Reply frame received for 3 I0409 00:27:09.937311 7 log.go:172] (0xc0044d2a50) (0xc000da7ea0) Create stream I0409 00:27:09.937319 7 log.go:172] (0xc0044d2a50) (0xc000da7ea0) Stream added, broadcasting: 5 I0409 00:27:09.938074 7 log.go:172] (0xc0044d2a50) Reply frame received for 5 I0409 00:27:10.035473 7 log.go:172] (0xc0044d2a50) Data frame received for 3 I0409 00:27:10.035527 7 log.go:172] (0xc000da7d60) (3) Data frame handling I0409 00:27:10.035568 7 log.go:172] (0xc000da7d60) (3) Data frame sent I0409 00:27:10.035592 
7 log.go:172] (0xc0044d2a50) Data frame received for 3 I0409 00:27:10.035627 7 log.go:172] (0xc0044d2a50) Data frame received for 5 I0409 00:27:10.035668 7 log.go:172] (0xc000da7ea0) (5) Data frame handling I0409 00:27:10.035694 7 log.go:172] (0xc000da7d60) (3) Data frame handling I0409 00:27:10.037694 7 log.go:172] (0xc0044d2a50) Data frame received for 1 I0409 00:27:10.037768 7 log.go:172] (0xc000b323c0) (1) Data frame handling I0409 00:27:10.037805 7 log.go:172] (0xc000b323c0) (1) Data frame sent I0409 00:27:10.037830 7 log.go:172] (0xc0044d2a50) (0xc000b323c0) Stream removed, broadcasting: 1 I0409 00:27:10.037852 7 log.go:172] (0xc0044d2a50) Go away received I0409 00:27:10.038006 7 log.go:172] (0xc0044d2a50) (0xc000b323c0) Stream removed, broadcasting: 1 I0409 00:27:10.038033 7 log.go:172] (0xc0044d2a50) (0xc000da7d60) Stream removed, broadcasting: 3 I0409 00:27:10.038046 7 log.go:172] (0xc0044d2a50) (0xc000da7ea0) Stream removed, broadcasting: 5 Apr 9 00:27:10.038: INFO: Found all expected endpoints: [netserver-0] Apr 9 00:27:10.041: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.212:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5352 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:27:10.041: INFO: >>> kubeConfig: /root/.kube/config I0409 00:27:10.074619 7 log.go:172] (0xc002f12790) (0xc000d5da40) Create stream I0409 00:27:10.074644 7 log.go:172] (0xc002f12790) (0xc000d5da40) Stream added, broadcasting: 1 I0409 00:27:10.076570 7 log.go:172] (0xc002f12790) Reply frame received for 1 I0409 00:27:10.076613 7 log.go:172] (0xc002f12790) (0xc000da7f40) Create stream I0409 00:27:10.076629 7 log.go:172] (0xc002f12790) (0xc000da7f40) Stream added, broadcasting: 3 I0409 00:27:10.078093 7 log.go:172] (0xc002f12790) Reply frame received for 3 I0409 00:27:10.078136 7 log.go:172] (0xc002f12790) (0xc000b32640) Create 
stream I0409 00:27:10.078151 7 log.go:172] (0xc002f12790) (0xc000b32640) Stream added, broadcasting: 5 I0409 00:27:10.079009 7 log.go:172] (0xc002f12790) Reply frame received for 5 I0409 00:27:10.145853 7 log.go:172] (0xc002f12790) Data frame received for 3 I0409 00:27:10.145897 7 log.go:172] (0xc000da7f40) (3) Data frame handling I0409 00:27:10.145914 7 log.go:172] (0xc000da7f40) (3) Data frame sent I0409 00:27:10.145968 7 log.go:172] (0xc002f12790) Data frame received for 5 I0409 00:27:10.146000 7 log.go:172] (0xc000b32640) (5) Data frame handling I0409 00:27:10.146091 7 log.go:172] (0xc002f12790) Data frame received for 3 I0409 00:27:10.146123 7 log.go:172] (0xc000da7f40) (3) Data frame handling I0409 00:27:10.147872 7 log.go:172] (0xc002f12790) Data frame received for 1 I0409 00:27:10.147913 7 log.go:172] (0xc000d5da40) (1) Data frame handling I0409 00:27:10.147945 7 log.go:172] (0xc000d5da40) (1) Data frame sent I0409 00:27:10.148026 7 log.go:172] (0xc002f12790) (0xc000d5da40) Stream removed, broadcasting: 1 I0409 00:27:10.148069 7 log.go:172] (0xc002f12790) Go away received I0409 00:27:10.148170 7 log.go:172] (0xc002f12790) (0xc000d5da40) Stream removed, broadcasting: 1 I0409 00:27:10.148210 7 log.go:172] (0xc002f12790) (0xc000da7f40) Stream removed, broadcasting: 3 I0409 00:27:10.148223 7 log.go:172] (0xc002f12790) (0xc000b32640) Stream removed, broadcasting: 5 Apr 9 00:27:10.148: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:27:10.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5352" for this suite. 
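The node-pod check above execs `curl http://<pod-ip>:8080/hostName` from a pod on the host network, so the request originates from the node's own IP. A sketch of that probe pod (pod and container names from the log; the image tag and args are assumptions):

```yaml
# Sketch only; the e2e networking utilities create this pod.
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod   # from the log
spec:
  hostNetwork: true               # traffic leaves from the node IP
  containers:
    - name: agnhost               # from the log
      image: registry.k8s.io/e2e-test-images/agnhost:2.12  # assumed tag
      args: ["pause"]             # assumption: idle container for kubectl exec
```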
• [SLOW TEST:20.572 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3612,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:27:10.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 9 00:27:10.192: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix929400598/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:27:10.273: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2593" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":200,"skipped":3625,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:27:10.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-ef17fcf3-839a-465e-b3e2-f7e767bb0f65 in namespace container-probe-4644 Apr 9 00:27:14.339: INFO: Started pod busybox-ef17fcf3-839a-465e-b3e2-f7e767bb0f65 in namespace container-probe-4644 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 00:27:14.342: INFO: Initial restart count of pod busybox-ef17fcf3-839a-465e-b3e2-f7e767bb0f65 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:14.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4644" for this suite. 
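The "should *not* be restarted" case holds the pod for roughly four minutes and asserts `restartCount` stays at 0, because the probed file exists for the whole run. A sketch of such a pod (the probe command matches the test name; pod name, image, and timings are assumptions):

```yaml
# Sketch only; the exec probe succeeds continuously, so no restarts occur.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example     # placeholder
spec:
  containers:
    - name: busybox
      image: busybox                 # assumed image
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5       # assumed timing
        periodSeconds: 5             # assumed timing
```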
• [SLOW TEST:244.673 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3625,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:14.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 9 00:31:19.580: INFO: Successfully updated pod "annotationupdatefc1c9ee1-5e8f-4341-9905-8a8b00b7690d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7592" for this suite. 
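The annotation-update test works because a downward API volume exposing `metadata.annotations` is refreshed by the kubelet after the pod's annotations change, without restarting the container. A sketch of the relevant volume (all names here are placeholders):

```yaml
# Sketch only: the kubelet rewrites /etc/podinfo/annotations on update.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example     # placeholder
  annotations:
    build: "one"                     # value later patched by the test
spec:
  containers:
    - name: client-container
      image: busybox                 # assumed image
      command: ["sleep", "600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```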
• [SLOW TEST:6.665 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3632,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:21.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-dca8c636-2c76-4d86-9cfd-c17644b02e1a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:21.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8250" for this suite. 
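The empty-key test succeeds by failing: ConfigMap data keys must be non-empty (and match the usual `[-._a-zA-Z0-9]+` key syntax), so the API server rejects the create with a validation error. A sketch of the invalid object (name shortened; the log shows the generated name):

```yaml
# Sketch only: this manifest is expected to be REJECTED at create time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey    # placeholder
data:
  "": "value"                      # empty key -> validation error
```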
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":203,"skipped":3636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:21.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 9 00:31:28.484: INFO: 0 pods remaining Apr 9 00:31:28.484: INFO: 0 pods has nil DeletionTimestamp Apr 9 00:31:28.484: INFO: Apr 9 00:31:29.481: INFO: 0 pods remaining Apr 9 00:31:29.481: INFO: 0 pods has nil DeletionTimestamp Apr 9 00:31:29.481: INFO: STEP: Gathering metrics W0409 00:31:30.161504 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 9 00:31:30.161: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:30.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5812" for this suite. 
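"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the owner gets a `deletionTimestamp` plus a `foregroundDeletion` finalizer and is only removed once every dependent is gone, which is why the log still reports the RC while "0 pods remaining" settles. A sketch of the delete options involved (the exact request body the test sends is not shown in the log):

```yaml
# Sketch only: DeleteOptions body for foreground cascading deletion.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner survives until dependents are deleted
```

With recent kubectl this corresponds, if memory serves, to `kubectl delete rc <name> --cascade=foreground`; the default (`background`) deletes the owner immediately and lets the garbage collector clean up the pods afterwards.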
• [SLOW TEST:8.493 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":204,"skipped":3661,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:30.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-c8d25df1-ccdc-49ae-a058-dd34f82afe75 STEP: Creating a pod to test consume configMaps Apr 9 00:31:30.690: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c" in namespace "projected-9270" to be "Succeeded or Failed" Apr 9 00:31:30.800: INFO: Pod "pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 110.191814ms Apr 9 00:31:32.830: INFO: Pod "pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140295507s Apr 9 00:31:34.833: INFO: Pod "pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143598836s STEP: Saw pod success Apr 9 00:31:34.833: INFO: Pod "pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c" satisfied condition "Succeeded or Failed" Apr 9 00:31:34.836: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c container projected-configmap-volume-test: STEP: delete the pod Apr 9 00:31:34.899: INFO: Waiting for pod pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c to disappear Apr 9 00:31:34.909: INFO: Pod pod-projected-configmaps-7d2703b8-05cc-4518-8b34-cabc86fd469c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:34.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9270" for this suite. 
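For the defaultMode case above, the projected volume sets file permissions for all projected keys via `defaultMode`. A sketch under that assumption (ConfigMap name from the log; the mode value and other names are placeholders, since the log does not show which mode the test chose):

```yaml
# Sketch only; the test then reads the file mode from inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-defaultmode   # placeholder
spec:
  containers:
    - name: projected-configmap-volume-test # from the log
      image: busybox                        # assumed image
      volumeMounts:
        - name: cm-volume
          mountPath: /etc/cm
  volumes:
    - name: cm-volume
      projected:
        defaultMode: 0400                   # assumed mode value
        sources:
          - configMap:
              name: projected-configmap-test-volume-c8d25df1-ccdc-49ae-a058-dd34f82afe75
```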
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3661,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:34.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 9 00:31:35.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-1834 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 9 00:31:38.018: INFO: stderr: "" Apr 9 00:31:38.018: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. 
Apr 9 00:31:38.018: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 9 00:31:38.018: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1834" to be "running and ready, or succeeded" Apr 9 00:31:38.036: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 17.788043ms Apr 9 00:31:40.038: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020492385s Apr 9 00:31:42.043: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.024754475s Apr 9 00:31:42.043: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 9 00:31:42.043: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 9 00:31:42.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834' Apr 9 00:31:42.150: INFO: stderr: "" Apr 9 00:31:42.150: INFO: stdout: "I0409 00:31:40.144210 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hf7x 324\nI0409 00:31:40.344344 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/bqpj 205\nI0409 00:31:40.544422 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/dfw 516\nI0409 00:31:40.744406 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/fh56 498\nI0409 00:31:40.944391 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/jkvx 504\nI0409 00:31:41.144381 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/4dx 301\nI0409 00:31:41.344448 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jptl 310\nI0409 00:31:41.544406 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/cwt 503\nI0409 00:31:41.744412 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/z77s 320\nI0409 00:31:41.944391 1 logs_generator.go:76] 
9 POST /api/v1/namespaces/kube-system/pods/c55l 417\nI0409 00:31:42.144402 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/78kq 548\n" STEP: limiting log lines Apr 9 00:31:42.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834 --tail=1' Apr 9 00:31:42.250: INFO: stderr: "" Apr 9 00:31:42.250: INFO: stdout: "I0409 00:31:42.144402 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/78kq 548\n" Apr 9 00:31:42.250: INFO: got output "I0409 00:31:42.144402 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/78kq 548\n" STEP: limiting log bytes Apr 9 00:31:42.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834 --limit-bytes=1' Apr 9 00:31:42.351: INFO: stderr: "" Apr 9 00:31:42.351: INFO: stdout: "I" Apr 9 00:31:42.351: INFO: got output "I" STEP: exposing timestamps Apr 9 00:31:42.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834 --tail=1 --timestamps' Apr 9 00:31:42.458: INFO: stderr: "" Apr 9 00:31:42.458: INFO: stdout: "2020-04-09T00:31:42.344549301Z I0409 00:31:42.344396 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/9fk 245\n" Apr 9 00:31:42.458: INFO: got output "2020-04-09T00:31:42.344549301Z I0409 00:31:42.344396 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/9fk 245\n" STEP: restricting to a time range Apr 9 00:31:44.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834 --since=1s' Apr 9 00:31:45.067: INFO: stderr: "" Apr 9 00:31:45.067: INFO: stdout: "I0409 00:31:44.144384 1 logs_generator.go:76] 20 GET 
/api/v1/namespaces/ns/pods/4vf7 285\nI0409 00:31:44.344417 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/k9m6 210\nI0409 00:31:44.544434 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/mpb 418\nI0409 00:31:44.744412 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/h5j 557\nI0409 00:31:44.944442 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/rhvj 441\n" Apr 9 00:31:45.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1834 --since=24h' Apr 9 00:31:45.183: INFO: stderr: "" Apr 9 00:31:45.183: INFO: stdout: "I0409 00:31:40.144210 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/hf7x 324\nI0409 00:31:40.344344 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/bqpj 205\nI0409 00:31:40.544422 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/dfw 516\nI0409 00:31:40.744406 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/fh56 498\nI0409 00:31:40.944391 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/jkvx 504\nI0409 00:31:41.144381 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/4dx 301\nI0409 00:31:41.344448 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jptl 310\nI0409 00:31:41.544406 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/cwt 503\nI0409 00:31:41.744412 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/z77s 320\nI0409 00:31:41.944391 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/c55l 417\nI0409 00:31:42.144402 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/78kq 548\nI0409 00:31:42.344396 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/9fk 245\nI0409 00:31:42.544376 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/dng 393\nI0409 00:31:42.744372 1 logs_generator.go:76] 13 PUT 
/api/v1/namespaces/kube-system/pods/7lp 474\nI0409 00:31:42.944397 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/t7f 467\nI0409 00:31:43.144335 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/kkj 323\nI0409 00:31:43.344401 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/js7 484\nI0409 00:31:43.544395 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/tq2v 459\nI0409 00:31:43.744447 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/gdr8 246\nI0409 00:31:43.944388 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/2wr 523\nI0409 00:31:44.144384 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/4vf7 285\nI0409 00:31:44.344417 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/k9m6 210\nI0409 00:31:44.544434 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/mpb 418\nI0409 00:31:44.744412 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/h5j 557\nI0409 00:31:44.944442 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/rhvj 441\nI0409 00:31:45.144395 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/jk8 482\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 9 00:31:45.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1834' Apr 9 00:31:52.755: INFO: stderr: "" Apr 9 00:31:52.755: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:31:52.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1834" for this suite. 
• [SLOW TEST:17.851 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":206,"skipped":3669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:31:52.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qn66 STEP: Creating a pod to test atomic-volume-subpath Apr 9 00:31:52.851: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qn66" in namespace "subpath-5514" to be "Succeeded or Failed" Apr 9 00:31:52.855: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.324353ms Apr 9 00:31:54.859: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789674s Apr 9 00:31:56.872: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 4.02073673s Apr 9 00:31:58.876: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 6.024844055s Apr 9 00:32:00.890: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 8.039096622s Apr 9 00:32:02.894: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 10.042758419s Apr 9 00:32:04.898: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 12.046995345s Apr 9 00:32:06.902: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 14.051364457s Apr 9 00:32:08.906: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 16.05558214s Apr 9 00:32:10.920: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 18.06901345s Apr 9 00:32:12.924: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 20.072798866s Apr 9 00:32:14.928: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Running", Reason="", readiness=true. Elapsed: 22.077200072s Apr 9 00:32:16.950: INFO: Pod "pod-subpath-test-configmap-qn66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.09926486s STEP: Saw pod success Apr 9 00:32:16.950: INFO: Pod "pod-subpath-test-configmap-qn66" satisfied condition "Succeeded or Failed" Apr 9 00:32:16.958: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-qn66 container test-container-subpath-configmap-qn66: STEP: delete the pod Apr 9 00:32:16.995: INFO: Waiting for pod pod-subpath-test-configmap-qn66 to disappear Apr 9 00:32:17.000: INFO: Pod pod-subpath-test-configmap-qn66 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qn66 Apr 9 00:32:17.000: INFO: Deleting pod "pod-subpath-test-configmap-qn66" in namespace "subpath-5514" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:32:17.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5514" for this suite. • [SLOW TEST:24.257 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":207,"skipped":3706,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 
00:32:17.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:32:17.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc" in namespace "projected-9039" to be "Succeeded or Failed" Apr 9 00:32:17.116: INFO: Pod "downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15219ms Apr 9 00:32:19.232: INFO: Pod "downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118568823s Apr 9 00:32:21.236: INFO: Pod "downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.123119347s STEP: Saw pod success Apr 9 00:32:21.236: INFO: Pod "downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc" satisfied condition "Succeeded or Failed" Apr 9 00:32:21.240: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc container client-container: STEP: delete the pod Apr 9 00:32:21.267: INFO: Waiting for pod downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc to disappear Apr 9 00:32:21.282: INFO: Pod downwardapi-volume-69df5cc2-cad4-427d-a7b2-39a44b8694bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:32:21.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9039" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3721,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:32:21.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a 
pod to test downward API volume plugin Apr 9 00:32:21.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9" in namespace "downward-api-7347" to be "Succeeded or Failed" Apr 9 00:32:21.356: INFO: Pod "downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565088ms Apr 9 00:32:23.369: INFO: Pod "downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016169674s Apr 9 00:32:25.373: INFO: Pod "downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019714583s STEP: Saw pod success Apr 9 00:32:25.373: INFO: Pod "downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9" satisfied condition "Succeeded or Failed" Apr 9 00:32:25.375: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9 container client-container: STEP: delete the pod Apr 9 00:32:25.424: INFO: Waiting for pod downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9 to disappear Apr 9 00:32:25.428: INFO: Pod downwardapi-volume-46088580-df87-4704-86b7-408120c1e1c9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:32:25.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7347" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3736,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:32:25.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5342.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5342.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5342.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5342.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5342.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5342.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:32:31.557: INFO: DNS probes using dns-5342/dns-test-6a157ce3-c490-4670-bf9c-2368a260ca69 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:32:31.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5342" for this suite. 
• [SLOW TEST:6.221 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":210,"skipped":3751,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:32:31.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2504 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 9 00:32:31.696: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 9 00:32:31.994: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 9 00:32:34.160: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 9 00:32:35.998: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:37.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:39.999: INFO: 
The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:41.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:43.998: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:45.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:47.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:49.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:51.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 9 00:32:53.999: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 9 00:32:54.004: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 9 00:32:56.008: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 9 00:33:00.046: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.200:8080/dial?request=hostname&protocol=udp&host=10.244.2.199&port=8081&tries=1'] Namespace:pod-network-test-2504 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:33:00.046: INFO: >>> kubeConfig: /root/.kube/config I0409 00:33:00.083025 7 log.go:172] (0xc002f12370) (0xc001bcb040) Create stream I0409 00:33:00.083059 7 log.go:172] (0xc002f12370) (0xc001bcb040) Stream added, broadcasting: 1 I0409 00:33:00.085412 7 log.go:172] (0xc002f12370) Reply frame received for 1 I0409 00:33:00.085484 7 log.go:172] (0xc002f12370) (0xc0012b0000) Create stream I0409 00:33:00.085517 7 log.go:172] (0xc002f12370) (0xc0012b0000) Stream added, broadcasting: 3 I0409 00:33:00.086842 7 log.go:172] (0xc002f12370) Reply frame received for 3 I0409 00:33:00.086896 7 log.go:172] (0xc002f12370) (0xc001bcb0e0) Create stream I0409 00:33:00.086912 7 log.go:172] (0xc002f12370) (0xc001bcb0e0) Stream added, broadcasting: 5 I0409 00:33:00.088161 7 log.go:172] (0xc002f12370) Reply frame received for 5 I0409 
00:33:00.186808 7 log.go:172] (0xc002f12370) Data frame received for 3 I0409 00:33:00.186852 7 log.go:172] (0xc0012b0000) (3) Data frame handling I0409 00:33:00.186887 7 log.go:172] (0xc0012b0000) (3) Data frame sent I0409 00:33:00.187166 7 log.go:172] (0xc002f12370) Data frame received for 5 I0409 00:33:00.187227 7 log.go:172] (0xc001bcb0e0) (5) Data frame handling I0409 00:33:00.187259 7 log.go:172] (0xc002f12370) Data frame received for 3 I0409 00:33:00.187277 7 log.go:172] (0xc0012b0000) (3) Data frame handling I0409 00:33:00.189301 7 log.go:172] (0xc002f12370) Data frame received for 1 I0409 00:33:00.189328 7 log.go:172] (0xc001bcb040) (1) Data frame handling I0409 00:33:00.189349 7 log.go:172] (0xc001bcb040) (1) Data frame sent I0409 00:33:00.189367 7 log.go:172] (0xc002f12370) (0xc001bcb040) Stream removed, broadcasting: 1 I0409 00:33:00.189419 7 log.go:172] (0xc002f12370) Go away received I0409 00:33:00.189481 7 log.go:172] (0xc002f12370) (0xc001bcb040) Stream removed, broadcasting: 1 I0409 00:33:00.189507 7 log.go:172] (0xc002f12370) (0xc0012b0000) Stream removed, broadcasting: 3 I0409 00:33:00.189520 7 log.go:172] (0xc002f12370) (0xc001bcb0e0) Stream removed, broadcasting: 5 Apr 9 00:33:00.189: INFO: Waiting for responses: map[] Apr 9 00:33:00.193: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.200:8080/dial?request=hostname&protocol=udp&host=10.244.1.222&port=8081&tries=1'] Namespace:pod-network-test-2504 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:33:00.193: INFO: >>> kubeConfig: /root/.kube/config I0409 00:33:00.226612 7 log.go:172] (0xc002f12840) (0xc001bcbb80) Create stream I0409 00:33:00.226647 7 log.go:172] (0xc002f12840) (0xc001bcbb80) Stream added, broadcasting: 1 I0409 00:33:00.228318 7 log.go:172] (0xc002f12840) Reply frame received for 1 I0409 00:33:00.228353 7 log.go:172] (0xc002f12840) (0xc001bcbe00) Create stream I0409 
00:33:00.228367 7 log.go:172] (0xc002f12840) (0xc001bcbe00) Stream added, broadcasting: 3 I0409 00:33:00.229670 7 log.go:172] (0xc002f12840) Reply frame received for 3 I0409 00:33:00.229711 7 log.go:172] (0xc002f12840) (0xc0009deaa0) Create stream I0409 00:33:00.229725 7 log.go:172] (0xc002f12840) (0xc0009deaa0) Stream added, broadcasting: 5 I0409 00:33:00.230944 7 log.go:172] (0xc002f12840) Reply frame received for 5 I0409 00:33:00.294988 7 log.go:172] (0xc002f12840) Data frame received for 3 I0409 00:33:00.295025 7 log.go:172] (0xc001bcbe00) (3) Data frame handling I0409 00:33:00.295061 7 log.go:172] (0xc001bcbe00) (3) Data frame sent I0409 00:33:00.295380 7 log.go:172] (0xc002f12840) Data frame received for 5 I0409 00:33:00.295410 7 log.go:172] (0xc0009deaa0) (5) Data frame handling I0409 00:33:00.295440 7 log.go:172] (0xc002f12840) Data frame received for 3 I0409 00:33:00.295452 7 log.go:172] (0xc001bcbe00) (3) Data frame handling I0409 00:33:00.297047 7 log.go:172] (0xc002f12840) Data frame received for 1 I0409 00:33:00.297078 7 log.go:172] (0xc001bcbb80) (1) Data frame handling I0409 00:33:00.297106 7 log.go:172] (0xc001bcbb80) (1) Data frame sent I0409 00:33:00.297280 7 log.go:172] (0xc002f12840) (0xc001bcbb80) Stream removed, broadcasting: 1 I0409 00:33:00.297363 7 log.go:172] (0xc002f12840) Go away received I0409 00:33:00.297437 7 log.go:172] (0xc002f12840) (0xc001bcbb80) Stream removed, broadcasting: 1 I0409 00:33:00.297457 7 log.go:172] (0xc002f12840) (0xc001bcbe00) Stream removed, broadcasting: 3 I0409 00:33:00.297467 7 log.go:172] (0xc002f12840) (0xc0009deaa0) Stream removed, broadcasting: 5 Apr 9 00:33:00.297: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:33:00.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2504" for this suite. 
• [SLOW TEST:28.649 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3761,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:33:00.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-365581d1-0a6e-4696-bc0a-b04e224ea79c in namespace container-probe-2093 Apr 9 00:33:04.460: INFO: Started pod test-webserver-365581d1-0a6e-4696-bc0a-b04e224ea79c in namespace container-probe-2093 STEP: checking the pod's current state and verifying that restartCount is present Apr 9 00:33:04.463: INFO: Initial restart 
count of pod test-webserver-365581d1-0a6e-4696-bc0a-b04e224ea79c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:05.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2093" for this suite. • [SLOW TEST:244.817 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3769,"failed":0} SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:05.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:37:05.244: INFO: Creating deployment "test-recreate-deployment" Apr 9 00:37:05.248: INFO: Waiting deployment "test-recreate-deployment" to be 
updated to revision 1 Apr 9 00:37:05.429: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 9 00:37:07.437: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 9 00:37:07.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989425, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989425, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989425, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989425, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 9 00:37:09.444: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 9 00:37:09.451: INFO: Updating deployment test-recreate-deployment Apr 9 00:37:09.451: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 9 00:37:09.889: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-286 /apis/apps/v1/namespaces/deployment-286/deployments/test-recreate-deployment 3873e60e-ad83-46ad-848b-b61eb6265c45 6552246 2 2020-04-09 00:37:05 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b28e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-09 00:37:09 +0000 UTC,LastTransitionTime:2020-04-09 00:37:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-09 00:37:09 +0000 UTC,LastTransitionTime:2020-04-09 00:37:05 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 9 00:37:09.893: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-286 
/apis/apps/v1/namespaces/deployment-286/replicasets/test-recreate-deployment-5f94c574ff 3fea556b-8630-4bf8-a0c5-dd8b5f721715 6552245 1 2020-04-09 00:37:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3873e60e-ad83-46ad-848b-b61eb6265c45 0xc003b29247 0xc003b29248}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b292a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 00:37:09.893: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 9 00:37:09.893: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-286 /apis/apps/v1/namespaces/deployment-286/replicasets/test-recreate-deployment-846c7dd955 132434a4-216e-4f7f-9438-07600cf963ca 6552235 2 2020-04-09 00:37:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3873e60e-ad83-46ad-848b-b61eb6265c45 0xc003b29487 0xc003b29488}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b294f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 9 00:37:09.948: INFO: Pod "test-recreate-deployment-5f94c574ff-2lqrg" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2lqrg test-recreate-deployment-5f94c574ff- deployment-286 /api/v1/namespaces/deployment-286/pods/test-recreate-deployment-5f94c574ff-2lqrg 4503d92e-014f-4f7d-979c-c0e6d3376a8a 6552247 0 2020-04-09 00:37:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 3fea556b-8630-4bf8-a0c5-dd8b5f721715 0xc002f08237 0xc002f08238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2mjc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2mjc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2mjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:37:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:37:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:37:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-09 00:37:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-09 00:37:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:09.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-286" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":213,"skipped":3771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:09.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 9 00:37:10.176: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:10.180: INFO: Number of nodes with available pods: 0 Apr 9 00:37:10.180: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:11.186: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:11.190: INFO: Number of nodes with available pods: 0 Apr 9 00:37:11.190: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:12.205: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:12.329: INFO: Number of nodes with available pods: 0 Apr 9 00:37:12.329: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:13.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:13.188: INFO: Number of nodes with available pods: 0 Apr 9 00:37:13.188: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:14.186: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:14.224: INFO: Number of nodes with available pods: 2 Apr 9 00:37:14.224: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 9 00:37:14.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:14.253: INFO: Number of nodes with available pods: 1 Apr 9 00:37:14.253: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:15.259: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:15.262: INFO: Number of nodes with available pods: 1 Apr 9 00:37:15.262: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:16.258: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:16.261: INFO: Number of nodes with available pods: 1 Apr 9 00:37:16.261: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:17.259: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:17.263: INFO: Number of nodes with available pods: 1 Apr 9 00:37:17.263: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:37:18.280: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:37:18.308: INFO: Number of nodes with available pods: 2 Apr 9 00:37:18.308: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7185, will wait for the garbage collector to delete the pods Apr 9 00:37:18.397: INFO: Deleting DaemonSet.extensions daemon-set took: 6.783701ms Apr 9 00:37:18.497: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.273222ms Apr 9 00:37:21.500: INFO: Number of nodes with available pods: 0 Apr 9 00:37:21.500: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 00:37:21.503: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7185/daemonsets","resourceVersion":"6552374"},"items":null} Apr 9 00:37:21.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7185/pods","resourceVersion":"6552374"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:21.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7185" for this suite. 
• [SLOW TEST:11.535 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":214,"skipped":3801,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:21.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:35.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8601" for this suite. 
• [SLOW TEST:14.079 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":215,"skipped":3811,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:35.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 9 00:37:35.665: INFO: Waiting up to 5m0s for pod "pod-92a3c128-bf50-441f-89f5-46214dc93d70" in namespace "emptydir-2947" to be "Succeeded or Failed" Apr 9 00:37:35.679: INFO: Pod "pod-92a3c128-bf50-441f-89f5-46214dc93d70": Phase="Pending", Reason="", readiness=false. Elapsed: 14.257923ms Apr 9 00:37:37.682: INFO: Pod "pod-92a3c128-bf50-441f-89f5-46214dc93d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017651381s Apr 9 00:37:39.686: INFO: Pod "pod-92a3c128-bf50-441f-89f5-46214dc93d70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021752958s STEP: Saw pod success Apr 9 00:37:39.686: INFO: Pod "pod-92a3c128-bf50-441f-89f5-46214dc93d70" satisfied condition "Succeeded or Failed" Apr 9 00:37:39.690: INFO: Trying to get logs from node latest-worker2 pod pod-92a3c128-bf50-441f-89f5-46214dc93d70 container test-container: STEP: delete the pod Apr 9 00:37:39.748: INFO: Waiting for pod pod-92a3c128-bf50-441f-89f5-46214dc93d70 to disappear Apr 9 00:37:39.763: INFO: Pod pod-92a3c128-bf50-441f-89f5-46214dc93d70 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:39.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2947" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3831,"failed":0} ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:39.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-331" for this suite. • [SLOW TEST:5.110 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":217,"skipped":3831,"failed":0} [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:44.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:44.995: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "tables-6430" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":218,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:45.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9183 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9183 I0409 00:37:45.200274 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9183, replica count: 2 I0409 00:37:48.250786 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:37:51.251026 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 00:37:51.251: 
INFO: Creating new exec pod Apr 9 00:37:56.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9183 execpodk595l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 9 00:37:56.511: INFO: stderr: "I0409 00:37:56.410529 2476 log.go:172] (0xc0000e8630) (0xc0003eaa00) Create stream\nI0409 00:37:56.410575 2476 log.go:172] (0xc0000e8630) (0xc0003eaa00) Stream added, broadcasting: 1\nI0409 00:37:56.412800 2476 log.go:172] (0xc0000e8630) Reply frame received for 1\nI0409 00:37:56.412841 2476 log.go:172] (0xc0000e8630) (0xc000904000) Create stream\nI0409 00:37:56.412852 2476 log.go:172] (0xc0000e8630) (0xc000904000) Stream added, broadcasting: 3\nI0409 00:37:56.413951 2476 log.go:172] (0xc0000e8630) Reply frame received for 3\nI0409 00:37:56.414003 2476 log.go:172] (0xc0000e8630) (0xc0009040a0) Create stream\nI0409 00:37:56.414026 2476 log.go:172] (0xc0000e8630) (0xc0009040a0) Stream added, broadcasting: 5\nI0409 00:37:56.414917 2476 log.go:172] (0xc0000e8630) Reply frame received for 5\nI0409 00:37:56.503516 2476 log.go:172] (0xc0000e8630) Data frame received for 5\nI0409 00:37:56.503546 2476 log.go:172] (0xc0009040a0) (5) Data frame handling\nI0409 00:37:56.503562 2476 log.go:172] (0xc0009040a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0409 00:37:56.503766 2476 log.go:172] (0xc0000e8630) Data frame received for 5\nI0409 00:37:56.503789 2476 log.go:172] (0xc0009040a0) (5) Data frame handling\nI0409 00:37:56.503813 2476 log.go:172] (0xc0009040a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0409 00:37:56.504343 2476 log.go:172] (0xc0000e8630) Data frame received for 5\nI0409 00:37:56.504377 2476 log.go:172] (0xc0009040a0) (5) Data frame handling\nI0409 00:37:56.504398 2476 log.go:172] (0xc0000e8630) Data frame received for 3\nI0409 00:37:56.504413 2476 log.go:172] (0xc000904000) (3) Data frame handling\nI0409 
00:37:56.506423 2476 log.go:172] (0xc0000e8630) Data frame received for 1\nI0409 00:37:56.506443 2476 log.go:172] (0xc0003eaa00) (1) Data frame handling\nI0409 00:37:56.506458 2476 log.go:172] (0xc0003eaa00) (1) Data frame sent\nI0409 00:37:56.506579 2476 log.go:172] (0xc0000e8630) (0xc0003eaa00) Stream removed, broadcasting: 1\nI0409 00:37:56.506967 2476 log.go:172] (0xc0000e8630) (0xc0003eaa00) Stream removed, broadcasting: 1\nI0409 00:37:56.506982 2476 log.go:172] (0xc0000e8630) (0xc000904000) Stream removed, broadcasting: 3\nI0409 00:37:56.507131 2476 log.go:172] (0xc0000e8630) (0xc0009040a0) Stream removed, broadcasting: 5\n" Apr 9 00:37:56.511: INFO: stdout: "" Apr 9 00:37:56.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9183 execpodk595l -- /bin/sh -x -c nc -zv -t -w 2 10.96.88.105 80' Apr 9 00:37:56.710: INFO: stderr: "I0409 00:37:56.641346 2497 log.go:172] (0xc000a520b0) (0xc0006d32c0) Create stream\nI0409 00:37:56.641427 2497 log.go:172] (0xc000a520b0) (0xc0006d32c0) Stream added, broadcasting: 1\nI0409 00:37:56.644183 2497 log.go:172] (0xc000a520b0) Reply frame received for 1\nI0409 00:37:56.644229 2497 log.go:172] (0xc000a520b0) (0xc00075c0a0) Create stream\nI0409 00:37:56.644239 2497 log.go:172] (0xc000a520b0) (0xc00075c0a0) Stream added, broadcasting: 3\nI0409 00:37:56.645461 2497 log.go:172] (0xc000a520b0) Reply frame received for 3\nI0409 00:37:56.645508 2497 log.go:172] (0xc000a520b0) (0xc0003d4000) Create stream\nI0409 00:37:56.645521 2497 log.go:172] (0xc000a520b0) (0xc0003d4000) Stream added, broadcasting: 5\nI0409 00:37:56.646498 2497 log.go:172] (0xc000a520b0) Reply frame received for 5\nI0409 00:37:56.705336 2497 log.go:172] (0xc000a520b0) Data frame received for 5\nI0409 00:37:56.705385 2497 log.go:172] (0xc0003d4000) (5) Data frame handling\nI0409 00:37:56.705396 2497 log.go:172] (0xc0003d4000) (5) Data frame sent\nI0409 00:37:56.705405 2497 
log.go:172] (0xc000a520b0) Data frame received for 5\nI0409 00:37:56.705414 2497 log.go:172] (0xc0003d4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.88.105 80\nConnection to 10.96.88.105 80 port [tcp/http] succeeded!\nI0409 00:37:56.705438 2497 log.go:172] (0xc000a520b0) Data frame received for 3\nI0409 00:37:56.705446 2497 log.go:172] (0xc00075c0a0) (3) Data frame handling\nI0409 00:37:56.706745 2497 log.go:172] (0xc000a520b0) Data frame received for 1\nI0409 00:37:56.706775 2497 log.go:172] (0xc0006d32c0) (1) Data frame handling\nI0409 00:37:56.706791 2497 log.go:172] (0xc0006d32c0) (1) Data frame sent\nI0409 00:37:56.706805 2497 log.go:172] (0xc000a520b0) (0xc0006d32c0) Stream removed, broadcasting: 1\nI0409 00:37:56.706829 2497 log.go:172] (0xc000a520b0) Go away received\nI0409 00:37:56.707144 2497 log.go:172] (0xc000a520b0) (0xc0006d32c0) Stream removed, broadcasting: 1\nI0409 00:37:56.707160 2497 log.go:172] (0xc000a520b0) (0xc00075c0a0) Stream removed, broadcasting: 3\nI0409 00:37:56.707166 2497 log.go:172] (0xc000a520b0) (0xc0003d4000) Stream removed, broadcasting: 5\n" Apr 9 00:37:56.710: INFO: stdout: "" Apr 9 00:37:56.710: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:37:56.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9183" for this suite. 
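The connectivity checks above run `nc -zv -t -w 2 <host> <port>` from inside the exec pod: a zero-I/O TCP connect with a 2-second timeout. As a rough local sketch only (not the e2e framework's code), the same probe can be approximated in plain bash using its `/dev/tcp` pseudo-files; the `probe` helper and the target address below are illustrative stand-ins:

```shell
#!/usr/bin/env bash
# Rough stand-in for `nc -zv -t -w 2 host port`: attempt a TCP connect
# with a 2-second timeout and report success/failure without sending data.
probe() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "Connection to ${host} ${port} port succeeded!"
  else
    echo "Connection to ${host} ${port} port failed"
    return 1
  fi
}

# Port 1 on localhost is normally closed, so this is expected to report failure.
probe 127.0.0.1 1 || true
```

In the test itself the targets are the `externalname-service` DNS name and the service's ClusterIP (10.96.88.105), exercised from a pod so that cluster DNS and kube-proxy rules are in the path.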
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.740 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":219,"skipped":3857,"failed":0} [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:37:56.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 9 00:38:01.334: INFO: Successfully updated pod "pod-update-activedeadlineseconds-83f35af5-bc4d-43da-b7ff-57bbb40c1690" Apr 9 00:38:01.334: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-83f35af5-bc4d-43da-b7ff-57bbb40c1690" in namespace "pods-8304" to be "terminated due to deadline exceeded" Apr 9 00:38:01.350: INFO: Pod 
"pod-update-activedeadlineseconds-83f35af5-bc4d-43da-b7ff-57bbb40c1690": Phase="Running", Reason="", readiness=true. Elapsed: 15.300749ms Apr 9 00:38:03.354: INFO: Pod "pod-update-activedeadlineseconds-83f35af5-bc4d-43da-b7ff-57bbb40c1690": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.019894109s Apr 9 00:38:03.354: INFO: Pod "pod-update-activedeadlineseconds-83f35af5-bc4d-43da-b7ff-57bbb40c1690" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:38:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8304" for this suite. • [SLOW TEST:6.648 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:38:03.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-8185db53-2f1b-4fb2-a789-b35e604804fd STEP: Creating a pod to test consume configMaps Apr 9 00:38:03.471: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b" in namespace "configmap-2858" to be "Succeeded or Failed" Apr 9 00:38:03.475: INFO: Pod "pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.774016ms Apr 9 00:38:05.479: INFO: Pod "pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007986465s Apr 9 00:38:07.483: INFO: Pod "pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011728461s STEP: Saw pod success Apr 9 00:38:07.483: INFO: Pod "pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b" satisfied condition "Succeeded or Failed" Apr 9 00:38:07.485: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b container configmap-volume-test: STEP: delete the pod Apr 9 00:38:07.550: INFO: Waiting for pod pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b to disappear Apr 9 00:38:07.554: INFO: Pod pod-configmaps-d5d6e22d-2285-4122-ace2-a6ed2ae4b96b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:38:07.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2858" for this suite. 
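The per-pod waits in this log ('Waiting up to 5m0s for pod ... to be "Succeeded or Failed"', with growing `Elapsed:` values) all follow one poll-until-deadline pattern. A minimal bash sketch of that pattern follows; the `get_phase` stub is invented here for illustration (the real framework queries the API server for `.status.phase`), and the short timeout is not the framework's 5m default:

```shell
#!/usr/bin/env bash
# Poll until the pod phase matches, or a deadline passes -- the same
# pattern behind the log's "Waiting up to ... Elapsed:" messages.
wait_for_phase() {
  local want=$1 timeout_s=$2 start=$SECONDS
  while :; do
    get_phase   # sets $phase; stubbed below, real code would ask the API server
    echo "Phase=\"${phase}\". Elapsed: $((SECONDS - start))s"
    [ "$phase" = "$want" ] && return 0
    [ $((SECONDS - start)) -ge "$timeout_s" ] && return 1
    sleep 1
  done
}

# Stub lifecycle: Pending on the first two polls, Succeeded afterwards,
# mirroring the Pending/Pending/Succeeded sequence seen in the log.
calls=0
get_phase() {
  calls=$((calls + 1))
  if [ "$calls" -ge 3 ]; then phase=Succeeded; else phase=Pending; fi
}

wait_for_phase Succeeded 10 && echo 'satisfied condition "Succeeded or Failed"'
```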
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3882,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:38:07.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9710.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.196.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.196.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.196.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.196.106_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9710.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-9710.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9710.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9710.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.196.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.196.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.196.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.196.106_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:38:13.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.761: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.764: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.782: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.789: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:13.805: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:18.810: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.820: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.842: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.844: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.847: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.849: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod 
dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:18.865: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:23.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.821: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.848: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod 
dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.852: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.855: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:23.872: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:28.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.815: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod 
dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.819: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.822: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.842: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.845: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.848: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.851: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:28.868: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:33.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.820: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.842: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource 
(get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.850: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.854: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:33.870: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:38.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.817: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods 
dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.821: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.844: INFO: Unable to read jessie_udp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.850: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.853: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local from pod dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973: the server could not find the requested resource (get pods dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973) Apr 9 00:38:38.871: INFO: Lookups using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 failed for: [wheezy_udp@dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@dns-test-service.dns-9710.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local jessie_udp@dns-test-service.dns-9710.svc.cluster.local jessie_tcp@dns-test-service.dns-9710.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-9710.svc.cluster.local] Apr 9 00:38:43.868: INFO: DNS probes using dns-9710/dns-test-6627f1ed-c8b2-49aa-af7d-83b997d98973 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:38:44.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9710" for this suite. • [SLOW TEST:36.862 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":222,"skipped":3889,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:38:44.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment 
to be ready
Apr 9 00:38:44.812: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 9 00:38:46.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989524, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989524, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989524, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989524, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 9 00:38:49.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:00.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4803" for this suite.
STEP: Destroying namespace "webhook-4803-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.707 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":223,"skipped":3901,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:00.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 9 00:39:00.209: INFO: Waiting up to 5m0s for pod "pod-18c2c3c6-6377-497e-86e0-b0a8d689a883" in namespace "emptydir-7855" to be "Succeeded or Failed"
Apr 9 00:39:00.220: INFO: Pod "pod-18c2c3c6-6377-497e-86e0-b0a8d689a883": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02995ms
Apr 9 00:39:02.223: INFO: Pod "pod-18c2c3c6-6377-497e-86e0-b0a8d689a883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013559471s
Apr 9 00:39:04.227: INFO: Pod "pod-18c2c3c6-6377-497e-86e0-b0a8d689a883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017813233s
STEP: Saw pod success
Apr 9 00:39:04.227: INFO: Pod "pod-18c2c3c6-6377-497e-86e0-b0a8d689a883" satisfied condition "Succeeded or Failed"
Apr 9 00:39:04.230: INFO: Trying to get logs from node latest-worker pod pod-18c2c3c6-6377-497e-86e0-b0a8d689a883 container test-container:
STEP: delete the pod
Apr 9 00:39:04.269: INFO: Waiting for pod pod-18c2c3c6-6377-497e-86e0-b0a8d689a883 to disappear
Apr 9 00:39:04.279: INFO: Pod pod-18c2c3c6-6377-497e-86e0-b0a8d689a883 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7855" for this suite.
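The (root,0666,tmpfs) case above creates a pod that mounts a memory-backed emptyDir volume and verifies a file's permission bits on it. A minimal manifest in the same spirit (pod name, image, and command are illustrative, not the generated names from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 on the tmpfs mount and print its permissions
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed, as in the "tmpfs" variant of the test
```

`medium: Memory` is what distinguishes this variant from the plain emptyDir cases; the 0644 variant later in the log differs only in the mode being checked.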
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3911,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:04.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 9 00:39:04.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-809'
Apr 9 00:39:04.493: INFO: stderr: ""
Apr 9 00:39:04.493: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Apr 9 00:39:04.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-809'
Apr 9 00:39:07.934: INFO: stderr: ""
Apr 9 00:39:07.934: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-809" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":225,"skipped":3924,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:07.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 9 00:39:12.584: INFO: Successfully updated pod "labelsupdatee0f62321-93c3-417f-a436-a728190781ef"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:14.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6997" for this suite.
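The Projected downwardAPI test above exposes the pod's own labels through a volume file; when the test updates the labels, the kubelet rewrites the file, which is what "Successfully updated pod" verifies. A sketch of such a pod (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    # periodically print the projected labels file so updates are observable
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

After `kubectl label pod labels-demo key1=value2 --overwrite`, the contents of `/etc/podinfo/labels` eventually reflect the new value.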
• [SLOW TEST:6.669 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3935,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:14.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 9 00:39:14.672: INFO: Waiting up to 5m0s for pod "pod-720f1777-be07-4157-ae52-b32aad13a56f" in namespace "emptydir-1483" to be "Succeeded or Failed"
Apr 9 00:39:14.675: INFO: Pod "pod-720f1777-be07-4157-ae52-b32aad13a56f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.314862ms
Apr 9 00:39:16.679: INFO: Pod "pod-720f1777-be07-4157-ae52-b32aad13a56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007318688s
Apr 9 00:39:18.684: INFO: Pod "pod-720f1777-be07-4157-ae52-b32aad13a56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01185252s
STEP: Saw pod success
Apr 9 00:39:18.684: INFO: Pod "pod-720f1777-be07-4157-ae52-b32aad13a56f" satisfied condition "Succeeded or Failed"
Apr 9 00:39:18.687: INFO: Trying to get logs from node latest-worker2 pod pod-720f1777-be07-4157-ae52-b32aad13a56f container test-container:
STEP: delete the pod
Apr 9 00:39:18.706: INFO: Waiting for pod pod-720f1777-be07-4157-ae52-b32aad13a56f to disappear
Apr 9 00:39:18.725: INFO: Pod pod-720f1777-be07-4157-ae52-b32aad13a56f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:18.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1483" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3944,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:18.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:39:18.859: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43" in namespace "security-context-test-5399" to be "Succeeded or Failed"
Apr 9 00:39:18.890: INFO: Pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43": Phase="Pending", Reason="", readiness=false. Elapsed: 30.417592ms
Apr 9 00:39:20.893: INFO: Pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034076816s
Apr 9 00:39:22.902: INFO: Pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43": Phase="Running", Reason="", readiness=true. Elapsed: 4.042543718s
Apr 9 00:39:24.904: INFO: Pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045065434s
Apr 9 00:39:24.904: INFO: Pod "alpine-nnp-false-8d5a2748-c3e6-4122-bb19-6c1315ef4e43" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:24.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5399" for this suite.
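The AllowPrivilegeEscalation test above runs a pod whose container sets `allowPrivilegeEscalation: false` (the `no_new_privs` flag on Linux) and then verifies that privileges cannot be raised. A minimal manifest along those lines (name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    # with no_new_privs set, even a setuid binary cannot raise the effective UID
    command: ["sh", "-c", "id -u"]
    securityContext:
      allowPrivilegeEscalation: false
```

The e2e test additionally runs as a non-root user and checks the effective UID after invoking a setuid helper; the field shown here is the one the test exercises.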
• [SLOW TEST:6.178 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3960,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:24.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:39:24.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 9 00:39:27.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6249 create -f -'
Apr 9 00:39:30.821: INFO: stderr: ""
Apr 9 00:39:30.821: INFO: stdout: "e2e-test-crd-publish-openapi-7441-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 9 00:39:30.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6249 delete e2e-test-crd-publish-openapi-7441-crds test-cr'
Apr 9 00:39:30.913: INFO: stderr: ""
Apr 9 00:39:30.913: INFO: stdout: "e2e-test-crd-publish-openapi-7441-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 9 00:39:30.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6249 apply -f -'
Apr 9 00:39:31.157: INFO: stderr: ""
Apr 9 00:39:31.157: INFO: stdout: "e2e-test-crd-publish-openapi-7441-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 9 00:39:31.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6249 delete e2e-test-crd-publish-openapi-7441-crds test-cr'
Apr 9 00:39:31.282: INFO: stderr: ""
Apr 9 00:39:31.282: INFO: stdout: "e2e-test-crd-publish-openapi-7441-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 9 00:39:31.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7441-crds'
Apr 9 00:39:31.526: INFO: stderr: ""
Apr 9 00:39:31.526: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7441-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:34.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6249" for this suite.
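The CRD test above publishes a schema whose `spec` and `status` are embedded objects that preserve unknown fields, which is why `kubectl create`/`apply` accept arbitrary properties and `kubectl explain` only describes them as "Specification of Waldo"/"Status of Waldo". A hedged sketch of such a CRD (the group and names are illustrative; the test generates randomized ones like `e2e-test-crd-publish-openapi-7441-crd`):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com         # illustrative plural.group
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            # allow arbitrary unknown properties inside this embedded object
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

Without `x-kubernetes-preserve-unknown-fields`, structural-schema pruning would silently drop the unknown properties the test submits.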
• [SLOW TEST:9.498 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":229,"skipped":3963,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:34.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 9 00:39:37.493: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:37.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6103" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3963,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:37.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 9 00:39:37.763: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 9 00:39:37.789: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 9 00:39:37.789: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 9 00:39:37.802: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 9 00:39:37.802: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 9 00:39:37.886: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 9 00:39:37.886: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 9 00:39:45.311: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:45.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-2955" for this suite.
• [SLOW TEST:7.844 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":231,"skipped":3972,"failed":0}
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:45.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-dffcf99a-4a3e-4ee8-b3d5-2a546c134117
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:49.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5518" for this suite.
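The ConfigMap binary-data test above creates a ConfigMap carrying both text and binary payloads, mounts it, and checks that both files round-trip. ConfigMaps keep the two in separate fields: plain strings under `data`, base64-encoded bytes under `binaryData`. A sketch (name and keys are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo      # illustrative name
data:
  text-data: "some plain text"
binaryData:
  binary-file: 3q2+7w==            # base64-encoded raw bytes (here 0xDE 0xAD 0xBE 0xEF)
```

When mounted as a volume, `text-data` and `binary-file` each appear as a file; the binary file's contents are the decoded bytes, not the base64 string.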
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3972,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:49.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-3ffbf7da-a530-4773-bb8d-beda8e0c8064
STEP: Creating a pod to test consume secrets
Apr 9 00:39:49.597: INFO: Waiting up to 5m0s for pod "pod-secrets-2d25cead-282f-4497-976d-455b10848e3b" in namespace "secrets-4304" to be "Succeeded or Failed"
Apr 9 00:39:49.600: INFO: Pod "pod-secrets-2d25cead-282f-4497-976d-455b10848e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660267ms
Apr 9 00:39:51.604: INFO: Pod "pod-secrets-2d25cead-282f-4497-976d-455b10848e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007238135s
Apr 9 00:39:53.608: INFO: Pod "pod-secrets-2d25cead-282f-4497-976d-455b10848e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011187799s
STEP: Saw pod success
Apr 9 00:39:53.608: INFO: Pod "pod-secrets-2d25cead-282f-4497-976d-455b10848e3b" satisfied condition "Succeeded or Failed"
Apr 9 00:39:53.612: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2d25cead-282f-4497-976d-455b10848e3b container secret-volume-test:
STEP: delete the pod
Apr 9 00:39:53.755: INFO: Waiting for pod pod-secrets-2d25cead-282f-4497-976d-455b10848e3b to disappear
Apr 9 00:39:53.860: INFO: Pod pod-secrets-2d25cead-282f-4497-976d-455b10848e3b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:53.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4304" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3981,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:53.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 9 00:39:53.914: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:54.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8819" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":234,"skipped":3992,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:39:54.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-ab83efd0-7947-4883-ba0d-c572e96ea634
STEP: Creating a pod to test consume configMaps
Apr 9 00:39:54.647: INFO: Waiting up to 5m0s for pod "pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a" in namespace "configmap-8830" to be "Succeeded or Failed"
Apr 9 00:39:54.658: INFO: Pod "pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.567582ms
Apr 9 00:39:56.661: INFO: Pod "pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014553747s
Apr 9 00:39:58.665: INFO: Pod "pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018652024s
STEP: Saw pod success
Apr 9 00:39:58.665: INFO: Pod "pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a" satisfied condition "Succeeded or Failed"
Apr 9 00:39:58.668: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a container configmap-volume-test:
STEP: delete the pod
Apr 9 00:39:58.730: INFO: Waiting for pod pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a to disappear
Apr 9 00:39:58.736: INFO: Pod pod-configmaps-033c30d1-c1bd-49dc-9a96-afaff16ac59a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:39:58.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8830" for this suite.
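"Volume with mappings" in the ConfigMap test above means the volume uses an `items` list to project only selected keys, each at a chosen relative path, instead of one file per key at the mount root. A sketch (pod, ConfigMap, and key names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap           # illustrative ConfigMap name
      items:
      - key: data-2                # only this key is projected...
        path: path/to/data-2       # ...and at this relative path under the mount
```

Keys not listed under `items` are simply absent from the volume, which is the behavior the test asserts on.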
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4004,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:39:58.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 9 00:40:02.834: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8568 PodName:pod-sharedvolume-ae30e080-0983-4f9a-a325-b3bb6acd2239 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 9 00:40:02.834: INFO: >>> kubeConfig: /root/.kube/config I0409 00:40:02.866020 7 log.go:172] (0xc002d28b00) (0xc001363360) Create stream I0409 00:40:02.866049 7 log.go:172] (0xc002d28b00) (0xc001363360) Stream added, broadcasting: 1 I0409 00:40:02.868096 7 log.go:172] (0xc002d28b00) Reply frame received for 1 I0409 00:40:02.868131 7 log.go:172] (0xc002d28b00) (0xc000be08c0) Create stream I0409 00:40:02.868149 7 log.go:172] (0xc002d28b00) (0xc000be08c0) Stream added, broadcasting: 3 I0409 00:40:02.869254 7 log.go:172] (0xc002d28b00) Reply frame received for 3 I0409 00:40:02.869300 7 
log.go:172] (0xc002d28b00) (0xc000be0dc0) Create stream I0409 00:40:02.869321 7 log.go:172] (0xc002d28b00) (0xc000be0dc0) Stream added, broadcasting: 5 I0409 00:40:02.870238 7 log.go:172] (0xc002d28b00) Reply frame received for 5 I0409 00:40:02.955386 7 log.go:172] (0xc002d28b00) Data frame received for 5 I0409 00:40:02.955417 7 log.go:172] (0xc000be0dc0) (5) Data frame handling I0409 00:40:02.955437 7 log.go:172] (0xc002d28b00) Data frame received for 3 I0409 00:40:02.955446 7 log.go:172] (0xc000be08c0) (3) Data frame handling I0409 00:40:02.955465 7 log.go:172] (0xc000be08c0) (3) Data frame sent I0409 00:40:02.955492 7 log.go:172] (0xc002d28b00) Data frame received for 3 I0409 00:40:02.955506 7 log.go:172] (0xc000be08c0) (3) Data frame handling I0409 00:40:02.957103 7 log.go:172] (0xc002d28b00) Data frame received for 1 I0409 00:40:02.957245 7 log.go:172] (0xc001363360) (1) Data frame handling I0409 00:40:02.957277 7 log.go:172] (0xc001363360) (1) Data frame sent I0409 00:40:02.957296 7 log.go:172] (0xc002d28b00) (0xc001363360) Stream removed, broadcasting: 1 I0409 00:40:02.957310 7 log.go:172] (0xc002d28b00) Go away received I0409 00:40:02.957472 7 log.go:172] (0xc002d28b00) (0xc001363360) Stream removed, broadcasting: 1 I0409 00:40:02.957493 7 log.go:172] (0xc002d28b00) (0xc000be08c0) Stream removed, broadcasting: 3 I0409 00:40:02.957508 7 log.go:172] (0xc002d28b00) (0xc000be0dc0) Stream removed, broadcasting: 5 Apr 9 00:40:02.957: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:40:02.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8568" for this suite. 
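The shared-volume test above runs one pod whose containers both mount the same emptyDir: one container writes /usr/share/volumeshare/shareddata.txt, and the exec shown in the log reads it back from busybox-main-container. A hand-written equivalent might be sketched as follows; the pod name, second container name, and the written message are illustrative, not taken from the test source.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example   # the test's real pod name is randomly suffixed
spec:
  containers:
  - name: busybox-main-container   # the container the log execs "cat" in
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    # Writer side: both containers see the same emptyDir contents.
    command: ["/bin/sh", "-c",
              "echo shared > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}
```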
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":236,"skipped":4025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:40:02.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5944 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5944 I0409 00:40:03.144793 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5944, replica count: 2 I0409 00:40:06.195288 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0409 00:40:09.195568 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 9 00:40:09.195: INFO: Creating new exec pod Apr 9 00:40:14.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=services-5944 execpodsd6df -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 9 00:40:14.449: INFO: stderr: "I0409 00:40:14.347269 2670 log.go:172] (0xc0000e4630) (0xc0002fcaa0) Create stream\nI0409 00:40:14.347328 2670 log.go:172] (0xc0000e4630) (0xc0002fcaa0) Stream added, broadcasting: 1\nI0409 00:40:14.350456 2670 log.go:172] (0xc0000e4630) Reply frame received for 1\nI0409 00:40:14.350512 2670 log.go:172] (0xc0000e4630) (0xc00093a000) Create stream\nI0409 00:40:14.350530 2670 log.go:172] (0xc0000e4630) (0xc00093a000) Stream added, broadcasting: 3\nI0409 00:40:14.351619 2670 log.go:172] (0xc0000e4630) Reply frame received for 3\nI0409 00:40:14.351654 2670 log.go:172] (0xc0000e4630) (0xc0005cb220) Create stream\nI0409 00:40:14.351664 2670 log.go:172] (0xc0000e4630) (0xc0005cb220) Stream added, broadcasting: 5\nI0409 00:40:14.352732 2670 log.go:172] (0xc0000e4630) Reply frame received for 5\nI0409 00:40:14.442128 2670 log.go:172] (0xc0000e4630) Data frame received for 5\nI0409 00:40:14.442161 2670 log.go:172] (0xc0005cb220) (5) Data frame handling\nI0409 00:40:14.442191 2670 log.go:172] (0xc0005cb220) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0409 00:40:14.442470 2670 log.go:172] (0xc0000e4630) Data frame received for 5\nI0409 00:40:14.442513 2670 log.go:172] (0xc0005cb220) (5) Data frame handling\nI0409 00:40:14.442550 2670 log.go:172] (0xc0005cb220) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0409 00:40:14.442672 2670 log.go:172] (0xc0000e4630) Data frame received for 5\nI0409 00:40:14.442705 2670 log.go:172] (0xc0005cb220) (5) Data frame handling\nI0409 00:40:14.442740 2670 log.go:172] (0xc0000e4630) Data frame received for 3\nI0409 00:40:14.442757 2670 log.go:172] (0xc00093a000) (3) Data frame handling\nI0409 00:40:14.444662 2670 log.go:172] (0xc0000e4630) Data frame received for 1\nI0409 00:40:14.444685 2670 log.go:172] (0xc0002fcaa0) 
(1) Data frame handling\nI0409 00:40:14.444697 2670 log.go:172] (0xc0002fcaa0) (1) Data frame sent\nI0409 00:40:14.444709 2670 log.go:172] (0xc0000e4630) (0xc0002fcaa0) Stream removed, broadcasting: 1\nI0409 00:40:14.444755 2670 log.go:172] (0xc0000e4630) Go away received\nI0409 00:40:14.445014 2670 log.go:172] (0xc0000e4630) (0xc0002fcaa0) Stream removed, broadcasting: 1\nI0409 00:40:14.445028 2670 log.go:172] (0xc0000e4630) (0xc00093a000) Stream removed, broadcasting: 3\nI0409 00:40:14.445037 2670 log.go:172] (0xc0000e4630) (0xc0005cb220) Stream removed, broadcasting: 5\n" Apr 9 00:40:14.449: INFO: stdout: "" Apr 9 00:40:14.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5944 execpodsd6df -- /bin/sh -x -c nc -zv -t -w 2 10.96.239.194 80' Apr 9 00:40:14.659: INFO: stderr: "I0409 00:40:14.576126 2690 log.go:172] (0xc00003bb80) (0xc00095e000) Create stream\nI0409 00:40:14.576187 2690 log.go:172] (0xc00003bb80) (0xc00095e000) Stream added, broadcasting: 1\nI0409 00:40:14.579246 2690 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0409 00:40:14.579295 2690 log.go:172] (0xc00003bb80) (0xc000837400) Create stream\nI0409 00:40:14.579310 2690 log.go:172] (0xc00003bb80) (0xc000837400) Stream added, broadcasting: 3\nI0409 00:40:14.580229 2690 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0409 00:40:14.580256 2690 log.go:172] (0xc00003bb80) (0xc000a20000) Create stream\nI0409 00:40:14.580266 2690 log.go:172] (0xc00003bb80) (0xc000a20000) Stream added, broadcasting: 5\nI0409 00:40:14.581631 2690 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0409 00:40:14.651126 2690 log.go:172] (0xc00003bb80) Data frame received for 5\nI0409 00:40:14.651205 2690 log.go:172] (0xc000a20000) (5) Data frame handling\nI0409 00:40:14.651229 2690 log.go:172] (0xc000a20000) (5) Data frame sent\nI0409 00:40:14.651246 2690 log.go:172] (0xc00003bb80) Data frame received for 5\nI0409 
00:40:14.651259 2690 log.go:172] (0xc000a20000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.239.194 80\nConnection to 10.96.239.194 80 port [tcp/http] succeeded!\nI0409 00:40:14.651310 2690 log.go:172] (0xc00003bb80) Data frame received for 3\nI0409 00:40:14.651351 2690 log.go:172] (0xc000837400) (3) Data frame handling\nI0409 00:40:14.654529 2690 log.go:172] (0xc00003bb80) Data frame received for 1\nI0409 00:40:14.654568 2690 log.go:172] (0xc00095e000) (1) Data frame handling\nI0409 00:40:14.654582 2690 log.go:172] (0xc00095e000) (1) Data frame sent\nI0409 00:40:14.654595 2690 log.go:172] (0xc00003bb80) (0xc00095e000) Stream removed, broadcasting: 1\nI0409 00:40:14.654879 2690 log.go:172] (0xc00003bb80) (0xc00095e000) Stream removed, broadcasting: 1\nI0409 00:40:14.654898 2690 log.go:172] (0xc00003bb80) (0xc000837400) Stream removed, broadcasting: 3\nI0409 00:40:14.655047 2690 log.go:172] (0xc00003bb80) (0xc000a20000) Stream removed, broadcasting: 5\n" Apr 9 00:40:14.659: INFO: stdout: "" Apr 9 00:40:14.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5944 execpodsd6df -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30223' Apr 9 00:40:14.875: INFO: stderr: "I0409 00:40:14.771510 2711 log.go:172] (0xc00003a6e0) (0xc00031ce60) Create stream\nI0409 00:40:14.771568 2711 log.go:172] (0xc00003a6e0) (0xc00031ce60) Stream added, broadcasting: 1\nI0409 00:40:14.773816 2711 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0409 00:40:14.773868 2711 log.go:172] (0xc00003a6e0) (0xc000876140) Create stream\nI0409 00:40:14.773886 2711 log.go:172] (0xc00003a6e0) (0xc000876140) Stream added, broadcasting: 3\nI0409 00:40:14.774832 2711 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0409 00:40:14.774860 2711 log.go:172] (0xc00003a6e0) (0xc0008761e0) Create stream\nI0409 00:40:14.774868 2711 log.go:172] (0xc00003a6e0) (0xc0008761e0) Stream added, broadcasting: 5\nI0409 
00:40:14.775675 2711 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0409 00:40:14.868558 2711 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0409 00:40:14.868611 2711 log.go:172] (0xc000876140) (3) Data frame handling\nI0409 00:40:14.868646 2711 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0409 00:40:14.868682 2711 log.go:172] (0xc0008761e0) (5) Data frame handling\nI0409 00:40:14.868719 2711 log.go:172] (0xc0008761e0) (5) Data frame sent\nI0409 00:40:14.868740 2711 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0409 00:40:14.868756 2711 log.go:172] (0xc0008761e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30223\nConnection to 172.17.0.13 30223 port [tcp/30223] succeeded!\nI0409 00:40:14.870658 2711 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0409 00:40:14.870697 2711 log.go:172] (0xc00031ce60) (1) Data frame handling\nI0409 00:40:14.870720 2711 log.go:172] (0xc00031ce60) (1) Data frame sent\nI0409 00:40:14.870742 2711 log.go:172] (0xc00003a6e0) (0xc00031ce60) Stream removed, broadcasting: 1\nI0409 00:40:14.870821 2711 log.go:172] (0xc00003a6e0) Go away received\nI0409 00:40:14.871193 2711 log.go:172] (0xc00003a6e0) (0xc00031ce60) Stream removed, broadcasting: 1\nI0409 00:40:14.871212 2711 log.go:172] (0xc00003a6e0) (0xc000876140) Stream removed, broadcasting: 3\nI0409 00:40:14.871223 2711 log.go:172] (0xc00003a6e0) (0xc0008761e0) Stream removed, broadcasting: 5\n" Apr 9 00:40:14.875: INFO: stdout: "" Apr 9 00:40:14.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5944 execpodsd6df -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30223' Apr 9 00:40:15.096: INFO: stderr: "I0409 00:40:15.019878 2731 log.go:172] (0xc0000ea420) (0xc000bbc000) Create stream\nI0409 00:40:15.019948 2731 log.go:172] (0xc0000ea420) (0xc000bbc000) Stream added, broadcasting: 1\nI0409 00:40:15.022215 2731 log.go:172] (0xc0000ea420) Reply frame received 
for 1\nI0409 00:40:15.022281 2731 log.go:172] (0xc0000ea420) (0xc000bbc0a0) Create stream\nI0409 00:40:15.022302 2731 log.go:172] (0xc0000ea420) (0xc000bbc0a0) Stream added, broadcasting: 3\nI0409 00:40:15.023283 2731 log.go:172] (0xc0000ea420) Reply frame received for 3\nI0409 00:40:15.023317 2731 log.go:172] (0xc0000ea420) (0xc000bbc140) Create stream\nI0409 00:40:15.023328 2731 log.go:172] (0xc0000ea420) (0xc000bbc140) Stream added, broadcasting: 5\nI0409 00:40:15.024405 2731 log.go:172] (0xc0000ea420) Reply frame received for 5\nI0409 00:40:15.091062 2731 log.go:172] (0xc0000ea420) Data frame received for 3\nI0409 00:40:15.091190 2731 log.go:172] (0xc000bbc0a0) (3) Data frame handling\nI0409 00:40:15.091223 2731 log.go:172] (0xc0000ea420) Data frame received for 5\nI0409 00:40:15.091234 2731 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0409 00:40:15.091245 2731 log.go:172] (0xc000bbc140) (5) Data frame sent\nI0409 00:40:15.091255 2731 log.go:172] (0xc0000ea420) Data frame received for 5\nI0409 00:40:15.091276 2731 log.go:172] (0xc000bbc140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30223\nConnection to 172.17.0.12 30223 port [tcp/30223] succeeded!\nI0409 00:40:15.092239 2731 log.go:172] (0xc0000ea420) Data frame received for 1\nI0409 00:40:15.092255 2731 log.go:172] (0xc000bbc000) (1) Data frame handling\nI0409 00:40:15.092262 2731 log.go:172] (0xc000bbc000) (1) Data frame sent\nI0409 00:40:15.092272 2731 log.go:172] (0xc0000ea420) (0xc000bbc000) Stream removed, broadcasting: 1\nI0409 00:40:15.092317 2731 log.go:172] (0xc0000ea420) Go away received\nI0409 00:40:15.092513 2731 log.go:172] (0xc0000ea420) (0xc000bbc000) Stream removed, broadcasting: 1\nI0409 00:40:15.092530 2731 log.go:172] (0xc0000ea420) (0xc000bbc0a0) Stream removed, broadcasting: 3\nI0409 00:40:15.092538 2731 log.go:172] (0xc0000ea420) (0xc000bbc140) Stream removed, broadcasting: 5\n" Apr 9 00:40:15.096: INFO: stdout: "" Apr 9 00:40:15.096: INFO: Cleaning up the ExternalName 
to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:40:15.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5944" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.201 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":237,"skipped":4057,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:40:15.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:40:15.641: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Apr 9 00:40:17.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989615, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989615, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989615, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989615, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:40:20.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:40:21.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6956" for this suite. 
STEP: Destroying namespace "webhook-6956-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.408 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":238,"skipped":4059,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:40:21.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 in namespace container-probe-6089 Apr 9 00:40:25.739: INFO: Started pod liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 in namespace container-probe-6089 STEP: checking the pod's current 
state and verifying that restartCount is present Apr 9 00:40:25.742: INFO: Initial restart count of pod liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is 0 Apr 9 00:40:39.844: INFO: Restart count of pod container-probe-6089/liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is now 1 (14.101762163s elapsed) Apr 9 00:40:59.895: INFO: Restart count of pod container-probe-6089/liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is now 2 (34.153413814s elapsed) Apr 9 00:41:19.937: INFO: Restart count of pod container-probe-6089/liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is now 3 (54.194951362s elapsed) Apr 9 00:41:39.978: INFO: Restart count of pod container-probe-6089/liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is now 4 (1m14.236260512s elapsed) Apr 9 00:42:40.102: INFO: Restart count of pod container-probe-6089/liveness-25e81d72-7e4a-46c9-8143-1a7ae0360466 is now 5 (2m14.360063737s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:42:40.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6089" for this suite. 
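The restart counts above (1 through 5) come from a liveness probe that keeps failing, so the kubelet repeatedly kills and restarts the container; the widening gaps between restarts reflect the kubelet's exponential back-off. A minimal sketch of a pod that produces this behavior is below; the name, image, and probe timings are illustrative assumptions, not the test's actual spec.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example    # the test's pod name is randomly generated
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 10s, then the probed file disappears and the probe fails,
    # so restartCount increases monotonically on each kill-and-restart cycle.
    args: ["/bin/sh", "-c",
           "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```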
• [SLOW TEST:138.546 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4067,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:42:40.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 9 00:42:40.178: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 9 00:42:49.267: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:42:49.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9467" for this suite. • [SLOW TEST:9.157 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4071,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:42:49.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:42:53.376: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "kubelet-test-2008" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4072,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:42:53.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:42:54.005: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:42:56.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989774, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989774, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989774, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989773, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:42:59.082: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:43:11.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8474" for this suite. STEP: Destroying namespace "webhook-8474-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.946 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":242,"skipped":4078,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:43:11.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9855.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9855.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:43:17.454: INFO: DNS probes using dns-9855/dns-test-6b08c8a8-659a-4fe2-86cb-582e95ab1ccc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:43:17.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9855" for this suite. 
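The `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9855.pod.cluster.local"}'` pipeline in the probe commands above builds the pod's A record name by replacing the dots in its IPv4 address with dashes and appending `<namespace>.pod.cluster.local`. The same transformation in Python, for reference:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build a cluster-DNS pod A record name from an IPv4 address.

    e.g. 10.244.1.5 in namespace dns-9855
         -> 10-244-1-5.dns-9855.pod.cluster.local
    """
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

assert pod_a_record("10.244.1.5", "dns-9855") == "10-244-1-5.dns-9855.pod.cluster.local"
```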
• [SLOW TEST:6.257 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":243,"skipped":4082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:43:17.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:43:33.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2050" for this suite. 
• [SLOW TEST:16.411 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":244,"skipped":4119,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:43:34.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 9 00:43:34.054: INFO: Waiting up to 5m0s for pod "pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0" in namespace "emptydir-7964" to be "Succeeded or Failed" Apr 9 00:43:34.068: INFO: Pod "pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.650943ms Apr 9 00:43:36.072: INFO: Pod "pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018326513s Apr 9 00:43:38.076: INFO: Pod "pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022488533s STEP: Saw pod success Apr 9 00:43:38.076: INFO: Pod "pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0" satisfied condition "Succeeded or Failed" Apr 9 00:43:38.079: INFO: Trying to get logs from node latest-worker2 pod pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0 container test-container: STEP: delete the pod Apr 9 00:43:38.124: INFO: Waiting for pod pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0 to disappear Apr 9 00:43:38.136: INFO: Pod pod-2cdbd70c-78e6-43ec-adbf-45174c6bd6d0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:43:38.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7964" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:43:38.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 9 00:43:38.232: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:38.248: INFO: Number of nodes with available pods: 0 Apr 9 00:43:38.248: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:43:39.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:39.274: INFO: Number of nodes with available pods: 0 Apr 9 00:43:39.274: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:43:40.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:40.323: INFO: Number of nodes with available pods: 0 Apr 9 00:43:40.323: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:43:41.253: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:41.256: INFO: Number of nodes with available pods: 1 Apr 9 00:43:41.256: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:43:42.258: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:42.263: INFO: Number of nodes with available pods: 2 Apr 9 00:43:42.263: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 9 00:43:42.287: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:42.289: INFO: Number of nodes with available pods: 1 Apr 9 00:43:42.289: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:43.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:43.297: INFO: Number of nodes with available pods: 1 Apr 9 00:43:43.297: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:44.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:44.316: INFO: Number of nodes with available pods: 1 Apr 9 00:43:44.316: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:45.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:45.305: INFO: Number of nodes with available pods: 1 Apr 9 00:43:45.305: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:46.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:46.299: INFO: Number of nodes with available pods: 1 Apr 9 00:43:46.299: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:47.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:47.299: INFO: Number of nodes with available pods: 1 Apr 9 00:43:47.299: INFO: Node 
latest-worker2 is running more than one daemon pod Apr 9 00:43:48.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:48.297: INFO: Number of nodes with available pods: 1 Apr 9 00:43:48.297: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:49.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:49.299: INFO: Number of nodes with available pods: 1 Apr 9 00:43:49.299: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:50.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:50.297: INFO: Number of nodes with available pods: 1 Apr 9 00:43:50.297: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:51.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:51.299: INFO: Number of nodes with available pods: 1 Apr 9 00:43:51.299: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:52.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:52.299: INFO: Number of nodes with available pods: 1 Apr 9 00:43:52.299: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:53.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:53.298: INFO: Number of nodes with 
available pods: 1 Apr 9 00:43:53.298: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:54.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:54.312: INFO: Number of nodes with available pods: 1 Apr 9 00:43:54.312: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:55.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:55.298: INFO: Number of nodes with available pods: 1 Apr 9 00:43:55.298: INFO: Node latest-worker2 is running more than one daemon pod Apr 9 00:43:56.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 9 00:43:56.298: INFO: Number of nodes with available pods: 2 Apr 9 00:43:56.298: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6981, will wait for the garbage collector to delete the pods Apr 9 00:43:56.362: INFO: Deleting DaemonSet.extensions daemon-set took: 6.380402ms Apr 9 00:43:56.662: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.219916ms Apr 9 00:44:03.066: INFO: Number of nodes with available pods: 0 Apr 9 00:44:03.066: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 00:44:03.069: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6981/daemonsets","resourceVersion":"6554841"},"items":null} Apr 9 00:44:03.071: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6981/pods","resourceVersion":"6554841"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:03.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6981" for this suite. • [SLOW TEST:24.945 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":246,"skipped":4189,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:03.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:44:03.162: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758" in namespace "downward-api-9920" to be "Succeeded or Failed" Apr 9 00:44:03.182: INFO: Pod "downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758": Phase="Pending", Reason="", readiness=false. Elapsed: 20.169204ms Apr 9 00:44:05.187: INFO: Pod "downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025022255s Apr 9 00:44:07.191: INFO: Pod "downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029108673s STEP: Saw pod success Apr 9 00:44:07.191: INFO: Pod "downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758" satisfied condition "Succeeded or Failed" Apr 9 00:44:07.194: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758 container client-container: STEP: delete the pod Apr 9 00:44:07.210: INFO: Waiting for pod downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758 to disappear Apr 9 00:44:07.214: INFO: Pod downwardapi-volume-8ed95b5f-e56a-481f-af5d-1727d7e38758 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:07.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9920" for this suite. 
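The Downward API test above projects the container's memory limit into a volume file via `resourceFieldRef`. The exposed value is the resource quantity divided by the configured `divisor` and rounded up to an integer, so a limit of `64Mi` with divisor `1Mi` reads back as `64`. A sketch of that conversion (assuming the round-up semantics; binary suffixes only, for brevity):

```python
import math

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_quantity(q: str) -> int:
    """Parse a quantity like '64Mi' into bytes (binary suffixes only)."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)

def downward_value(limit: str, divisor: str = "1") -> str:
    # resourceFieldRef exposes ceil(limit / divisor) as a plain integer string
    return str(math.ceil(parse_quantity(limit) / parse_quantity(divisor)))

assert downward_value("64Mi", "1Mi") == "64"
assert downward_value("64Mi") == str(64 * 1024 * 1024)
```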
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4191,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:07.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 9 00:44:07.323: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:12.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-449" for this suite. 
• [SLOW TEST:5.398 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":248,"skipped":4213,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:12.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-d59fad5e-fcc4-4c33-b9f9-f56d00e462a4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d59fad5e-fcc4-4c33-b9f9-f56d00e462a4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:18.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-424" for this suite. 
• [SLOW TEST:6.121 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4234,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:18.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 9 00:44:18.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 9 00:44:19.007: INFO: stderr: "" Apr 9 00:44:19.007: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:19.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2637" for this suite. 
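The api-versions test above shells out to `kubectl api-versions` and checks that the core `v1` group appears in the newline-separated stdout shown in the log. The check itself is trivial to reproduce (using an abbreviated sample of that output):

```python
# Abbreviated sample of the `kubectl api-versions` stdout logged above.
stdout = (
    "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\n"
    "networking.k8s.io/v1\nv1\n"
)
versions = stdout.strip().split("\n")
assert "v1" in versions                               # the check the test performs
assert all("/" in v or v == "v1" for v in versions)   # group/version, or core "v1"
```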
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":250,"skipped":4238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:19.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:44:19.077: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:20.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8120" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":251,"skipped":4272,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:20.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 9 00:44:24.900: INFO: Successfully updated pod "pod-update-a92265e5-b010-48bc-9e34-b5f7dfe6c184" STEP: verifying the updated pod is in kubernetes Apr 9 00:44:24.981: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:24.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8977" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4286,"failed":0} SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:24.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-6148 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6148 to expose endpoints map[] Apr 9 00:44:25.059: INFO: successfully validated that service multi-endpoint-test in namespace services-6148 exposes endpoints map[] (16.170476ms elapsed) STEP: Creating pod pod1 in namespace services-6148 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6148 to expose endpoints map[pod1:[100]] Apr 9 00:44:28.134: INFO: successfully validated that service multi-endpoint-test in namespace services-6148 exposes endpoints map[pod1:[100]] (3.063012912s elapsed) STEP: Creating pod pod2 in namespace services-6148 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6148 to expose endpoints map[pod1:[100] pod2:[101]] Apr 9 00:44:31.219: INFO: successfully validated that service multi-endpoint-test in namespace services-6148 exposes endpoints map[pod1:[100] 
pod2:[101]] (3.080610123s elapsed) STEP: Deleting pod pod1 in namespace services-6148 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6148 to expose endpoints map[pod2:[101]] Apr 9 00:44:32.251: INFO: successfully validated that service multi-endpoint-test in namespace services-6148 exposes endpoints map[pod2:[101]] (1.026940912s elapsed) STEP: Deleting pod pod2 in namespace services-6148 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6148 to expose endpoints map[] Apr 9 00:44:33.301: INFO: successfully validated that service multi-endpoint-test in namespace services-6148 exposes endpoints map[] (1.044409393s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:33.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6148" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.530 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":253,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:33.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 9 00:44:33.956: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 9 00:44:35.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989873, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989873, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989874, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721989873, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 9 00:44:39.003: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored 
version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:44:39.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6173-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:40.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7056" for this suite. STEP: Destroying namespace "webhook-7056-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.754 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":254,"skipped":4353,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:40.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 9 00:44:40.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083" in namespace "downward-api-1314" to be "Succeeded or Failed" Apr 9 00:44:40.367: INFO: Pod "downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083": Phase="Pending", Reason="", readiness=false. Elapsed: 46.907302ms Apr 9 00:44:42.371: INFO: Pod "downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050186645s Apr 9 00:44:44.375: INFO: Pod "downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054254553s STEP: Saw pod success Apr 9 00:44:44.375: INFO: Pod "downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083" satisfied condition "Succeeded or Failed" Apr 9 00:44:44.378: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083 container client-container: STEP: delete the pod Apr 9 00:44:44.414: INFO: Waiting for pod downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083 to disappear Apr 9 00:44:44.445: INFO: Pod downwardapi-volume-a0b629b3-c080-403d-bde4-a351389ba083 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:44:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1314" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:44:44.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5814 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5814 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5814 Apr 9 00:44:44.601: INFO: Found 0 stateful pods, waiting for 1 Apr 9 00:44:54.606: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 9 00:44:54.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 00:44:54.915: INFO: stderr: "I0409 00:44:54.742124 2772 log.go:172] (0xc000a00630) (0xc0006bf720) Create stream\nI0409 00:44:54.742178 2772 log.go:172] (0xc000a00630) (0xc0006bf720) Stream added, broadcasting: 1\nI0409 00:44:54.743961 2772 log.go:172] (0xc000a00630) Reply frame received for 1\nI0409 00:44:54.744006 2772 log.go:172] (0xc000a00630) (0xc0009d6000) Create stream\nI0409 00:44:54.744014 2772 log.go:172] (0xc000a00630) (0xc0009d6000) Stream added, broadcasting: 3\nI0409 00:44:54.744629 2772 log.go:172] (0xc000a00630) Reply frame received for 3\nI0409 00:44:54.744661 2772 log.go:172] (0xc000a00630) (0xc0009d60a0) Create stream\nI0409 00:44:54.744671 2772 log.go:172] (0xc000a00630) (0xc0009d60a0) Stream added, broadcasting: 5\nI0409 00:44:54.745497 2772 log.go:172] (0xc000a00630) Reply frame received for 5\nI0409 00:44:54.875822 2772 log.go:172] (0xc000a00630) Data frame received for 5\nI0409 
00:44:54.875849 2772 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0409 00:44:54.875868 2772 log.go:172] (0xc0009d60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 00:44:54.908486 2772 log.go:172] (0xc000a00630) Data frame received for 3\nI0409 00:44:54.908519 2772 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0409 00:44:54.908535 2772 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0409 00:44:54.908801 2772 log.go:172] (0xc000a00630) Data frame received for 3\nI0409 00:44:54.908818 2772 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0409 00:44:54.909231 2772 log.go:172] (0xc000a00630) Data frame received for 5\nI0409 00:44:54.909251 2772 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0409 00:44:54.910505 2772 log.go:172] (0xc000a00630) Data frame received for 1\nI0409 00:44:54.910549 2772 log.go:172] (0xc0006bf720) (1) Data frame handling\nI0409 00:44:54.910567 2772 log.go:172] (0xc0006bf720) (1) Data frame sent\nI0409 00:44:54.910583 2772 log.go:172] (0xc000a00630) (0xc0006bf720) Stream removed, broadcasting: 1\nI0409 00:44:54.910596 2772 log.go:172] (0xc000a00630) Go away received\nI0409 00:44:54.910889 2772 log.go:172] (0xc000a00630) (0xc0006bf720) Stream removed, broadcasting: 1\nI0409 00:44:54.910902 2772 log.go:172] (0xc000a00630) (0xc0009d6000) Stream removed, broadcasting: 3\nI0409 00:44:54.910909 2772 log.go:172] (0xc000a00630) (0xc0009d60a0) Stream removed, broadcasting: 5\n" Apr 9 00:44:54.915: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 00:44:54.915: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 00:44:54.918: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 9 00:45:04.923: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 00:45:04.923: INFO: Waiting for 
statefulset status.replicas updated to 0 Apr 9 00:45:04.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999535s Apr 9 00:45:05.950: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984400066s Apr 9 00:45:06.954: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980215935s Apr 9 00:45:07.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976471492s Apr 9 00:45:08.964: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972154751s Apr 9 00:45:09.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967123552s Apr 9 00:45:10.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.962982391s Apr 9 00:45:11.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.958902498s Apr 9 00:45:12.980: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954367233s Apr 9 00:45:13.985: INFO: Verifying statefulset ss doesn't scale past 1 for another 950.463871ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5814 Apr 9 00:45:14.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 00:45:15.222: INFO: stderr: "I0409 00:45:15.130545 2794 log.go:172] (0xc0009040b0) (0xc000831400) Create stream\nI0409 00:45:15.130605 2794 log.go:172] (0xc0009040b0) (0xc000831400) Stream added, broadcasting: 1\nI0409 00:45:15.132920 2794 log.go:172] (0xc0009040b0) Reply frame received for 1\nI0409 00:45:15.132964 2794 log.go:172] (0xc0009040b0) (0xc000bda000) Create stream\nI0409 00:45:15.132990 2794 log.go:172] (0xc0009040b0) (0xc000bda000) Stream added, broadcasting: 3\nI0409 00:45:15.134044 2794 log.go:172] (0xc0009040b0) Reply frame received for 3\nI0409 00:45:15.134070 2794 log.go:172] (0xc0009040b0) 
(0xc0008315e0) Create stream\nI0409 00:45:15.134077 2794 log.go:172] (0xc0009040b0) (0xc0008315e0) Stream added, broadcasting: 5\nI0409 00:45:15.135036 2794 log.go:172] (0xc0009040b0) Reply frame received for 5\nI0409 00:45:15.216873 2794 log.go:172] (0xc0009040b0) Data frame received for 3\nI0409 00:45:15.216910 2794 log.go:172] (0xc000bda000) (3) Data frame handling\nI0409 00:45:15.216923 2794 log.go:172] (0xc000bda000) (3) Data frame sent\nI0409 00:45:15.216933 2794 log.go:172] (0xc0009040b0) Data frame received for 3\nI0409 00:45:15.216941 2794 log.go:172] (0xc000bda000) (3) Data frame handling\nI0409 00:45:15.216972 2794 log.go:172] (0xc0009040b0) Data frame received for 5\nI0409 00:45:15.216981 2794 log.go:172] (0xc0008315e0) (5) Data frame handling\nI0409 00:45:15.216988 2794 log.go:172] (0xc0008315e0) (5) Data frame sent\nI0409 00:45:15.216996 2794 log.go:172] (0xc0009040b0) Data frame received for 5\nI0409 00:45:15.217003 2794 log.go:172] (0xc0008315e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 00:45:15.218524 2794 log.go:172] (0xc0009040b0) Data frame received for 1\nI0409 00:45:15.218550 2794 log.go:172] (0xc000831400) (1) Data frame handling\nI0409 00:45:15.218561 2794 log.go:172] (0xc000831400) (1) Data frame sent\nI0409 00:45:15.218738 2794 log.go:172] (0xc0009040b0) (0xc000831400) Stream removed, broadcasting: 1\nI0409 00:45:15.219082 2794 log.go:172] (0xc0009040b0) (0xc000831400) Stream removed, broadcasting: 1\nI0409 00:45:15.219100 2794 log.go:172] (0xc0009040b0) (0xc000bda000) Stream removed, broadcasting: 3\nI0409 00:45:15.219108 2794 log.go:172] (0xc0009040b0) (0xc0008315e0) Stream removed, broadcasting: 5\n" Apr 9 00:45:15.222: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 00:45:15.222: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 00:45:15.226: INFO: Found 1 
stateful pods, waiting for 3 Apr 9 00:45:25.231: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:45:25.231: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 9 00:45:25.231: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 9 00:45:25.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 00:45:25.439: INFO: stderr: "I0409 00:45:25.372756 2815 log.go:172] (0xc00003a4d0) (0xc000660be0) Create stream\nI0409 00:45:25.372814 2815 log.go:172] (0xc00003a4d0) (0xc000660be0) Stream added, broadcasting: 1\nI0409 00:45:25.374806 2815 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0409 00:45:25.374843 2815 log.go:172] (0xc00003a4d0) (0xc0007e2000) Create stream\nI0409 00:45:25.374855 2815 log.go:172] (0xc00003a4d0) (0xc0007e2000) Stream added, broadcasting: 3\nI0409 00:45:25.375597 2815 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0409 00:45:25.375634 2815 log.go:172] (0xc00003a4d0) (0xc000833360) Create stream\nI0409 00:45:25.375644 2815 log.go:172] (0xc00003a4d0) (0xc000833360) Stream added, broadcasting: 5\nI0409 00:45:25.376475 2815 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0409 00:45:25.433020 2815 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0409 00:45:25.433047 2815 log.go:172] (0xc000833360) (5) Data frame handling\nI0409 00:45:25.433060 2815 log.go:172] (0xc000833360) (5) Data frame sent\nI0409 00:45:25.433065 2815 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0409 00:45:25.433072 2815 log.go:172] (0xc000833360) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 
00:45:25.433337 2815 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0409 00:45:25.433350 2815 log.go:172] (0xc0007e2000) (3) Data frame handling\nI0409 00:45:25.433357 2815 log.go:172] (0xc0007e2000) (3) Data frame sent\nI0409 00:45:25.433489 2815 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0409 00:45:25.433524 2815 log.go:172] (0xc0007e2000) (3) Data frame handling\nI0409 00:45:25.435073 2815 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0409 00:45:25.435091 2815 log.go:172] (0xc000660be0) (1) Data frame handling\nI0409 00:45:25.435103 2815 log.go:172] (0xc000660be0) (1) Data frame sent\nI0409 00:45:25.435115 2815 log.go:172] (0xc00003a4d0) (0xc000660be0) Stream removed, broadcasting: 1\nI0409 00:45:25.435270 2815 log.go:172] (0xc00003a4d0) Go away received\nI0409 00:45:25.435374 2815 log.go:172] (0xc00003a4d0) (0xc000660be0) Stream removed, broadcasting: 1\nI0409 00:45:25.435388 2815 log.go:172] (0xc00003a4d0) (0xc0007e2000) Stream removed, broadcasting: 3\nI0409 00:45:25.435396 2815 log.go:172] (0xc00003a4d0) (0xc000833360) Stream removed, broadcasting: 5\n" Apr 9 00:45:25.439: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 00:45:25.439: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 00:45:25.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 00:45:25.695: INFO: stderr: "I0409 00:45:25.567917 2836 log.go:172] (0xc000549a20) (0xc00097e000) Create stream\nI0409 00:45:25.567973 2836 log.go:172] (0xc000549a20) (0xc00097e000) Stream added, broadcasting: 1\nI0409 00:45:25.570755 2836 log.go:172] (0xc000549a20) Reply frame received for 1\nI0409 00:45:25.570794 2836 log.go:172] (0xc000549a20) (0xc000c38000) Create stream\nI0409 
00:45:25.570804 2836 log.go:172] (0xc000549a20) (0xc000c38000) Stream added, broadcasting: 3\nI0409 00:45:25.571747 2836 log.go:172] (0xc000549a20) Reply frame received for 3\nI0409 00:45:25.571792 2836 log.go:172] (0xc000549a20) (0xc00097e0a0) Create stream\nI0409 00:45:25.571804 2836 log.go:172] (0xc000549a20) (0xc00097e0a0) Stream added, broadcasting: 5\nI0409 00:45:25.572836 2836 log.go:172] (0xc000549a20) Reply frame received for 5\nI0409 00:45:25.643490 2836 log.go:172] (0xc000549a20) Data frame received for 5\nI0409 00:45:25.643520 2836 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0409 00:45:25.643547 2836 log.go:172] (0xc00097e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 00:45:25.687442 2836 log.go:172] (0xc000549a20) Data frame received for 3\nI0409 00:45:25.687610 2836 log.go:172] (0xc000c38000) (3) Data frame handling\nI0409 00:45:25.687744 2836 log.go:172] (0xc000c38000) (3) Data frame sent\nI0409 00:45:25.687776 2836 log.go:172] (0xc000549a20) Data frame received for 3\nI0409 00:45:25.687793 2836 log.go:172] (0xc000c38000) (3) Data frame handling\nI0409 00:45:25.687853 2836 log.go:172] (0xc000549a20) Data frame received for 5\nI0409 00:45:25.687889 2836 log.go:172] (0xc00097e0a0) (5) Data frame handling\nI0409 00:45:25.689740 2836 log.go:172] (0xc000549a20) Data frame received for 1\nI0409 00:45:25.689765 2836 log.go:172] (0xc00097e000) (1) Data frame handling\nI0409 00:45:25.689780 2836 log.go:172] (0xc00097e000) (1) Data frame sent\nI0409 00:45:25.689796 2836 log.go:172] (0xc000549a20) (0xc00097e000) Stream removed, broadcasting: 1\nI0409 00:45:25.689812 2836 log.go:172] (0xc000549a20) Go away received\nI0409 00:45:25.690190 2836 log.go:172] (0xc000549a20) (0xc00097e000) Stream removed, broadcasting: 1\nI0409 00:45:25.690213 2836 log.go:172] (0xc000549a20) (0xc000c38000) Stream removed, broadcasting: 3\nI0409 00:45:25.690230 2836 log.go:172] (0xc000549a20) (0xc00097e0a0) Stream removed, broadcasting: 
5\n" Apr 9 00:45:25.695: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 00:45:25.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 00:45:25.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 9 00:45:25.954: INFO: stderr: "I0409 00:45:25.851455 2857 log.go:172] (0xc0000220b0) (0xc0009440a0) Create stream\nI0409 00:45:25.851512 2857 log.go:172] (0xc0000220b0) (0xc0009440a0) Stream added, broadcasting: 1\nI0409 00:45:25.853750 2857 log.go:172] (0xc0000220b0) Reply frame received for 1\nI0409 00:45:25.853792 2857 log.go:172] (0xc0000220b0) (0xc00094c000) Create stream\nI0409 00:45:25.853803 2857 log.go:172] (0xc0000220b0) (0xc00094c000) Stream added, broadcasting: 3\nI0409 00:45:25.854465 2857 log.go:172] (0xc0000220b0) Reply frame received for 3\nI0409 00:45:25.854492 2857 log.go:172] (0xc0000220b0) (0xc00094c0a0) Create stream\nI0409 00:45:25.854499 2857 log.go:172] (0xc0000220b0) (0xc00094c0a0) Stream added, broadcasting: 5\nI0409 00:45:25.855153 2857 log.go:172] (0xc0000220b0) Reply frame received for 5\nI0409 00:45:25.916705 2857 log.go:172] (0xc0000220b0) Data frame received for 5\nI0409 00:45:25.916738 2857 log.go:172] (0xc00094c0a0) (5) Data frame handling\nI0409 00:45:25.916762 2857 log.go:172] (0xc00094c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0409 00:45:25.945728 2857 log.go:172] (0xc0000220b0) Data frame received for 3\nI0409 00:45:25.945778 2857 log.go:172] (0xc00094c000) (3) Data frame handling\nI0409 00:45:25.945828 2857 log.go:172] (0xc00094c000) (3) Data frame sent\nI0409 00:45:25.945858 2857 log.go:172] (0xc0000220b0) Data frame received for 3\nI0409 00:45:25.945870 2857 log.go:172] (0xc00094c000) (3) 
Data frame handling\nI0409 00:45:25.946304 2857 log.go:172] (0xc0000220b0) Data frame received for 5\nI0409 00:45:25.946320 2857 log.go:172] (0xc00094c0a0) (5) Data frame handling\nI0409 00:45:25.948548 2857 log.go:172] (0xc0000220b0) Data frame received for 1\nI0409 00:45:25.948560 2857 log.go:172] (0xc0009440a0) (1) Data frame handling\nI0409 00:45:25.948567 2857 log.go:172] (0xc0009440a0) (1) Data frame sent\nI0409 00:45:25.948577 2857 log.go:172] (0xc0000220b0) (0xc0009440a0) Stream removed, broadcasting: 1\nI0409 00:45:25.948590 2857 log.go:172] (0xc0000220b0) Go away received\nI0409 00:45:25.949031 2857 log.go:172] (0xc0000220b0) (0xc0009440a0) Stream removed, broadcasting: 1\nI0409 00:45:25.949066 2857 log.go:172] (0xc0000220b0) (0xc00094c000) Stream removed, broadcasting: 3\nI0409 00:45:25.949094 2857 log.go:172] (0xc0000220b0) (0xc00094c0a0) Stream removed, broadcasting: 5\n" Apr 9 00:45:25.954: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 9 00:45:25.954: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 9 00:45:25.954: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 00:45:25.961: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 9 00:45:35.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 9 00:45:35.968: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 9 00:45:35.968: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 9 00:45:36.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999573s Apr 9 00:45:37.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978904149s Apr 9 00:45:38.014: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972875601s Apr 9 00:45:39.019: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.968225296s Apr 9 00:45:40.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963068767s Apr 9 00:45:41.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958073027s Apr 9 00:45:42.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.952456089s Apr 9 00:45:43.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.947363121s Apr 9 00:45:44.045: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.942507145s Apr 9 00:45:45.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.463971ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5814 Apr 9 00:45:46.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 00:45:46.264: INFO: stderr: "I0409 00:45:46.171080 2878 log.go:172] (0xc000b1edc0) (0xc000b5c5a0) Create stream\nI0409 00:45:46.171135 2878 log.go:172] (0xc000b1edc0) (0xc000b5c5a0) Stream added, broadcasting: 1\nI0409 00:45:46.173781 2878 log.go:172] (0xc000b1edc0) Reply frame received for 1\nI0409 00:45:46.173816 2878 log.go:172] (0xc000b1edc0) (0xc000b5c640) Create stream\nI0409 00:45:46.173843 2878 log.go:172] (0xc000b1edc0) (0xc000b5c640) Stream added, broadcasting: 3\nI0409 00:45:46.174730 2878 log.go:172] (0xc000b1edc0) Reply frame received for 3\nI0409 00:45:46.174767 2878 log.go:172] (0xc000b1edc0) (0xc000a041e0) Create stream\nI0409 00:45:46.174782 2878 log.go:172] (0xc000b1edc0) (0xc000a041e0) Stream added, broadcasting: 5\nI0409 00:45:46.175821 2878 log.go:172] (0xc000b1edc0) Reply frame received for 5\nI0409 00:45:46.257708 2878 log.go:172] (0xc000b1edc0) Data frame received for 3\nI0409 00:45:46.257741 2878 log.go:172] (0xc000b5c640) (3) Data frame handling\nI0409
00:45:46.257770 2878 log.go:172] (0xc000b1edc0) Data frame received for 5\nI0409 00:45:46.257833 2878 log.go:172] (0xc000a041e0) (5) Data frame handling\nI0409 00:45:46.257870 2878 log.go:172] (0xc000a041e0) (5) Data frame sent\nI0409 00:45:46.257891 2878 log.go:172] (0xc000b1edc0) Data frame received for 5\nI0409 00:45:46.257914 2878 log.go:172] (0xc000a041e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 00:45:46.257966 2878 log.go:172] (0xc000b5c640) (3) Data frame sent\nI0409 00:45:46.257991 2878 log.go:172] (0xc000b1edc0) Data frame received for 3\nI0409 00:45:46.258006 2878 log.go:172] (0xc000b5c640) (3) Data frame handling\nI0409 00:45:46.259524 2878 log.go:172] (0xc000b1edc0) Data frame received for 1\nI0409 00:45:46.259551 2878 log.go:172] (0xc000b5c5a0) (1) Data frame handling\nI0409 00:45:46.259570 2878 log.go:172] (0xc000b5c5a0) (1) Data frame sent\nI0409 00:45:46.259592 2878 log.go:172] (0xc000b1edc0) (0xc000b5c5a0) Stream removed, broadcasting: 1\nI0409 00:45:46.259608 2878 log.go:172] (0xc000b1edc0) Go away received\nI0409 00:45:46.260063 2878 log.go:172] (0xc000b1edc0) (0xc000b5c5a0) Stream removed, broadcasting: 1\nI0409 00:45:46.260086 2878 log.go:172] (0xc000b1edc0) (0xc000b5c640) Stream removed, broadcasting: 3\nI0409 00:45:46.260096 2878 log.go:172] (0xc000b1edc0) (0xc000a041e0) Stream removed, broadcasting: 5\n" Apr 9 00:45:46.264: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 00:45:46.264: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 00:45:46.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 00:45:46.468: INFO: stderr: "I0409 00:45:46.394346 2898 log.go:172] (0xc0003d26e0) (0xc0006234a0) 
Create stream\nI0409 00:45:46.394405 2898 log.go:172] (0xc0003d26e0) (0xc0006234a0) Stream added, broadcasting: 1\nI0409 00:45:46.397091 2898 log.go:172] (0xc0003d26e0) Reply frame received for 1\nI0409 00:45:46.397314 2898 log.go:172] (0xc0003d26e0) (0xc0005275e0) Create stream\nI0409 00:45:46.397342 2898 log.go:172] (0xc0003d26e0) (0xc0005275e0) Stream added, broadcasting: 3\nI0409 00:45:46.398376 2898 log.go:172] (0xc0003d26e0) Reply frame received for 3\nI0409 00:45:46.398405 2898 log.go:172] (0xc0003d26e0) (0xc0003d0000) Create stream\nI0409 00:45:46.398415 2898 log.go:172] (0xc0003d26e0) (0xc0003d0000) Stream added, broadcasting: 5\nI0409 00:45:46.399290 2898 log.go:172] (0xc0003d26e0) Reply frame received for 5\nI0409 00:45:46.461060 2898 log.go:172] (0xc0003d26e0) Data frame received for 5\nI0409 00:45:46.461342 2898 log.go:172] (0xc0003d0000) (5) Data frame handling\nI0409 00:45:46.461371 2898 log.go:172] (0xc0003d0000) (5) Data frame sent\nI0409 00:45:46.461384 2898 log.go:172] (0xc0003d26e0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 00:45:46.461396 2898 log.go:172] (0xc0003d0000) (5) Data frame handling\nI0409 00:45:46.461432 2898 log.go:172] (0xc0003d26e0) Data frame received for 3\nI0409 00:45:46.461459 2898 log.go:172] (0xc0005275e0) (3) Data frame handling\nI0409 00:45:46.461490 2898 log.go:172] (0xc0005275e0) (3) Data frame sent\nI0409 00:45:46.461507 2898 log.go:172] (0xc0003d26e0) Data frame received for 3\nI0409 00:45:46.461525 2898 log.go:172] (0xc0005275e0) (3) Data frame handling\nI0409 00:45:46.463275 2898 log.go:172] (0xc0003d26e0) Data frame received for 1\nI0409 00:45:46.463302 2898 log.go:172] (0xc0006234a0) (1) Data frame handling\nI0409 00:45:46.463328 2898 log.go:172] (0xc0006234a0) (1) Data frame sent\nI0409 00:45:46.463365 2898 log.go:172] (0xc0003d26e0) (0xc0006234a0) Stream removed, broadcasting: 1\nI0409 00:45:46.463480 2898 log.go:172] (0xc0003d26e0) Go away received\nI0409 
00:45:46.463735 2898 log.go:172] (0xc0003d26e0) (0xc0006234a0) Stream removed, broadcasting: 1\nI0409 00:45:46.463757 2898 log.go:172] (0xc0003d26e0) (0xc0005275e0) Stream removed, broadcasting: 3\nI0409 00:45:46.463768 2898 log.go:172] (0xc0003d26e0) (0xc0003d0000) Stream removed, broadcasting: 5\n" Apr 9 00:45:46.468: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 00:45:46.468: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 00:45:46.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5814 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 9 00:45:46.645: INFO: stderr: "I0409 00:45:46.582454 2920 log.go:172] (0xc000af06e0) (0xc0006e7360) Create stream\nI0409 00:45:46.582500 2920 log.go:172] (0xc000af06e0) (0xc0006e7360) Stream added, broadcasting: 1\nI0409 00:45:46.584233 2920 log.go:172] (0xc000af06e0) Reply frame received for 1\nI0409 00:45:46.584284 2920 log.go:172] (0xc000af06e0) (0xc00078c000) Create stream\nI0409 00:45:46.584302 2920 log.go:172] (0xc000af06e0) (0xc00078c000) Stream added, broadcasting: 3\nI0409 00:45:46.585043 2920 log.go:172] (0xc000af06e0) Reply frame received for 3\nI0409 00:45:46.585076 2920 log.go:172] (0xc000af06e0) (0xc00078c1e0) Create stream\nI0409 00:45:46.585094 2920 log.go:172] (0xc000af06e0) (0xc00078c1e0) Stream added, broadcasting: 5\nI0409 00:45:46.585842 2920 log.go:172] (0xc000af06e0) Reply frame received for 5\nI0409 00:45:46.638898 2920 log.go:172] (0xc000af06e0) Data frame received for 5\nI0409 00:45:46.638971 2920 log.go:172] (0xc00078c1e0) (5) Data frame handling\nI0409 00:45:46.638996 2920 log.go:172] (0xc00078c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0409 00:45:46.639028 2920 log.go:172] (0xc000af06e0) Data frame received for 
5\nI0409 00:45:46.639048 2920 log.go:172] (0xc00078c1e0) (5) Data frame handling\nI0409 00:45:46.639083 2920 log.go:172] (0xc000af06e0) Data frame received for 3\nI0409 00:45:46.639116 2920 log.go:172] (0xc00078c000) (3) Data frame handling\nI0409 00:45:46.639137 2920 log.go:172] (0xc00078c000) (3) Data frame sent\nI0409 00:45:46.639154 2920 log.go:172] (0xc000af06e0) Data frame received for 3\nI0409 00:45:46.639174 2920 log.go:172] (0xc00078c000) (3) Data frame handling\nI0409 00:45:46.640059 2920 log.go:172] (0xc000af06e0) Data frame received for 1\nI0409 00:45:46.640095 2920 log.go:172] (0xc0006e7360) (1) Data frame handling\nI0409 00:45:46.640122 2920 log.go:172] (0xc0006e7360) (1) Data frame sent\nI0409 00:45:46.640155 2920 log.go:172] (0xc000af06e0) (0xc0006e7360) Stream removed, broadcasting: 1\nI0409 00:45:46.640284 2920 log.go:172] (0xc000af06e0) Go away received\nI0409 00:45:46.640630 2920 log.go:172] (0xc000af06e0) (0xc0006e7360) Stream removed, broadcasting: 1\nI0409 00:45:46.640651 2920 log.go:172] (0xc000af06e0) (0xc00078c000) Stream removed, broadcasting: 3\nI0409 00:45:46.640662 2920 log.go:172] (0xc000af06e0) (0xc00078c1e0) Stream removed, broadcasting: 5\n" Apr 9 00:45:46.645: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 9 00:45:46.645: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 9 00:45:46.645: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 9 00:46:06.661: INFO: Deleting all statefulset in ns statefulset-5814 Apr 9 00:46:06.664: INFO: Scaling statefulset ss to 0 Apr 9 00:46:06.677: INFO: Waiting for statefulset status.replicas updated to 0 Apr 9 00:46:06.680: INFO: Deleting statefulset 
ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:06.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5814" for this suite. • [SLOW TEST:82.233 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":256,"skipped":4380,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:06.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-projected-all-test-volume-3f7bb5ae-08c5-499a-9603-566de4c65ab8 STEP: Creating secret with name secret-projected-all-test-volume-e92a07aa-2085-41e2-9097-fbc964e43e8d STEP: Creating a pod to test Check all projections for projected volume plugin Apr 9 00:46:06.770: INFO: Waiting up to 5m0s for pod "projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872" in namespace "projected-4385" to be "Succeeded or Failed" Apr 9 00:46:06.774: INFO: Pod "projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872": Phase="Pending", Reason="", readiness=false. Elapsed: 3.774234ms Apr 9 00:46:08.778: INFO: Pod "projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007899641s Apr 9 00:46:10.782: INFO: Pod "projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012440825s STEP: Saw pod success Apr 9 00:46:10.782: INFO: Pod "projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872" satisfied condition "Succeeded or Failed" Apr 9 00:46:10.786: INFO: Trying to get logs from node latest-worker2 pod projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872 container projected-all-volume-test: STEP: delete the pod Apr 9 00:46:10.818: INFO: Waiting for pod projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872 to disappear Apr 9 00:46:10.824: INFO: Pod projected-volume-1ecf3725-7dd8-4a70-947b-6883227a4872 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:10.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4385" for this suite. 
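The repeated `Waiting up to 5m0s for pod "…" to be "Succeeded or Failed"` / `Phase="Pending" … Elapsed: …` lines throughout this log come from a poll-until-terminal-phase loop. A minimal sketch of that pattern (hypothetical helper with a canned phase sequence, not the actual e2e framework code):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until the pod reaches a terminal phase or timeout_s elapses."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated status source: Pending twice, then Succeeded — mirroring the
# three status lines logged for each test pod above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), interval_s=0))  # Succeeded
```

The framework's 5m0s timeout and ~2s interval match the defaults sketched here; the elapsed times printed in the log are simply measured against the loop's start time.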
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4397,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:10.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-5f11018e-37ba-4312-9369-cb2b11d27eba STEP: Creating a pod to test consume configMaps Apr 9 00:46:10.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82" in namespace "configmap-5258" to be "Succeeded or Failed" Apr 9 00:46:10.920: INFO: Pod "pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82": Phase="Pending", Reason="", readiness=false. Elapsed: 19.827182ms Apr 9 00:46:12.925: INFO: Pod "pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024605331s Apr 9 00:46:14.930: INFO: Pod "pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0291631s STEP: Saw pod success Apr 9 00:46:14.930: INFO: Pod "pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82" satisfied condition "Succeeded or Failed" Apr 9 00:46:14.933: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82 container configmap-volume-test: STEP: delete the pod Apr 9 00:46:14.967: INFO: Waiting for pod pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82 to disappear Apr 9 00:46:14.974: INFO: Pod pod-configmaps-3816f4e2-12ea-4f34-a369-e4278d907b82 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:14.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5258" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4397,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:14.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:15.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5247" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":259,"skipped":4412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:15.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-d351ec29-a43f-4c5f-9237-ccc51e3b38bc STEP: Creating a pod to test consume configMaps Apr 9 00:46:15.204: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d" in namespace "configmap-2401" to be "Succeeded or Failed" Apr 9 00:46:15.208: INFO: Pod "pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.003754ms Apr 9 00:46:17.231: INFO: Pod "pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026212824s Apr 9 00:46:19.255: INFO: Pod "pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050195775s STEP: Saw pod success Apr 9 00:46:19.255: INFO: Pod "pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d" satisfied condition "Succeeded or Failed" Apr 9 00:46:19.257: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d container configmap-volume-test: STEP: delete the pod Apr 9 00:46:19.303: INFO: Waiting for pod pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d to disappear Apr 9 00:46:19.307: INFO: Pod pod-configmaps-a6a53800-fd64-485e-b5a9-f8077279aa0d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:19.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2401" for this suite. 
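The "Pods Set QOS Class" test above verifies that a pod with matching resource requests and limits is classified as Guaranteed. A simplified sketch of how that QoS classification is commonly described (illustrative model, not the scheduler's actual code):

```python
def qos_class(containers):
    """containers: list of {"requests": {...}, "limits": {...}} dicts.
    Guaranteed: every container sets cpu+memory limits and requests == limits.
    BestEffort: no container sets any requests or limits. Otherwise Burstable."""
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        for resource in ("cpu", "memory"):
            # A missing request defaults to the limit; any mismatch => Burstable.
            if resource not in lim or req.get(resource, lim[resource]) != lim[resource]:
                return "Burstable"
    return "Guaranteed"

# Matching requests and limits for cpu and memory, as in the test above:
print(qos_class([{"requests": {"cpu": "100m", "memory": "100Mi"},
                  "limits":   {"cpu": "100m", "memory": "100Mi"}}]))  # Guaranteed
```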
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:19.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 9 00:46:19.395: INFO: Waiting up to 5m0s for pod "downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743" in namespace "downward-api-3919" to be "Succeeded or Failed" Apr 9 00:46:19.403: INFO: Pod "downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743": Phase="Pending", Reason="", readiness=false. Elapsed: 7.894436ms Apr 9 00:46:21.406: INFO: Pod "downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01101425s Apr 9 00:46:23.410: INFO: Pod "downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015150323s STEP: Saw pod success Apr 9 00:46:23.410: INFO: Pod "downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743" satisfied condition "Succeeded or Failed" Apr 9 00:46:23.413: INFO: Trying to get logs from node latest-worker2 pod downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743 container dapi-container: STEP: delete the pod Apr 9 00:46:23.479: INFO: Waiting for pod downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743 to disappear Apr 9 00:46:23.487: INFO: Pod downward-api-6613a77a-f0be-4edf-97b0-95f1e38a4743 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:23.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3919" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:23.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 9 00:46:23.617: INFO: Waiting up to 5m0s for pod "downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a" in namespace "downward-api-2224" to be "Succeeded or Failed" Apr 9 
00:46:23.625: INFO: Pod "downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.001956ms Apr 9 00:46:25.629: INFO: Pod "downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012208409s Apr 9 00:46:27.633: INFO: Pod "downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016178763s STEP: Saw pod success Apr 9 00:46:27.633: INFO: Pod "downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a" satisfied condition "Succeeded or Failed" Apr 9 00:46:27.636: INFO: Trying to get logs from node latest-worker2 pod downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a container dapi-container: STEP: delete the pod Apr 9 00:46:27.728: INFO: Waiting for pod downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a to disappear Apr 9 00:46:27.731: INFO: Pod downward-api-4b82412a-287e-4f9b-b9f7-c4f59fe9052a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:27.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2224" for this suite. 
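The two Downward API tests above inject pod fields (host IP, pod UID) into container env vars via fieldRef paths. A minimal sketch of that field-path lookup (hypothetical helper with made-up sample values, not the kubelet's implementation):

```python
def resolve_field_ref(pod, field_path):
    """Walk a dotted fieldRef path (e.g. 'status.hostIP') through a pod object."""
    obj = pod
    for part in field_path.split("."):
        obj = obj[part]
    return obj

# Illustrative pod object; the UID and IP are sample values, not from this run.
pod = {
    "metadata": {"uid": "00000000-0000-0000-0000-000000000000"},
    "status": {"hostIP": "10.0.0.1"},
}
env = {
    "POD_UID": resolve_field_ref(pod, "metadata.uid"),
    "HOST_IP": resolve_field_ref(pod, "status.hostIP"),
}
print(env["HOST_IP"])  # 10.0.0.1
```

The `dapi-container` in each test simply echoes these variables, and the framework greps its logs for the expected values.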
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4528,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:27.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:46:27.781: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 9 00:46:30.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8213 create -f -' Apr 9 00:46:33.591: INFO: stderr: "" Apr 9 00:46:33.591: INFO: stdout: "e2e-test-crd-publish-openapi-4660-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 9 00:46:33.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8213 delete e2e-test-crd-publish-openapi-4660-crds test-cr' Apr 9 00:46:33.703: INFO: stderr: "" Apr 9 00:46:33.703: INFO: stdout: "e2e-test-crd-publish-openapi-4660-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 9 00:46:33.703: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8213 apply -f -' Apr 9 00:46:34.005: INFO: stderr: "" Apr 9 00:46:34.006: INFO: stdout: "e2e-test-crd-publish-openapi-4660-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 9 00:46:34.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8213 delete e2e-test-crd-publish-openapi-4660-crds test-cr' Apr 9 00:46:34.123: INFO: stderr: "" Apr 9 00:46:34.123: INFO: stdout: "e2e-test-crd-publish-openapi-4660-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 9 00:46:34.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4660-crds' Apr 9 00:46:34.387: INFO: stderr: "" Apr 9 00:46:34.387: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4660-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:37.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8213" for this suite. 
• [SLOW TEST:9.541 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":263,"skipped":4545,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:37.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-47f6950c-4efd-4033-9c0a-feb37c54a899 STEP: Creating a pod to test consume secrets Apr 9 00:46:37.336: INFO: Waiting up to 5m0s for pod "pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d" in namespace "secrets-4968" to be "Succeeded or Failed" Apr 9 00:46:37.346: INFO: Pod "pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.652111ms Apr 9 00:46:39.350: INFO: Pod "pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013351776s Apr 9 00:46:41.354: INFO: Pod "pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017859608s STEP: Saw pod success Apr 9 00:46:41.354: INFO: Pod "pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d" satisfied condition "Succeeded or Failed" Apr 9 00:46:41.358: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d container secret-volume-test: STEP: delete the pod Apr 9 00:46:41.376: INFO: Waiting for pod pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d to disappear Apr 9 00:46:41.379: INFO: Pod pod-secrets-e1b67f60-def8-4fc3-8f4b-edea8e7bf60d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:41.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4968" for this suite. 
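The secret-volume test above sets `defaultMode` and `fsGroup`. A hedged aside on a common point of confusion: the API field is a plain integer, so an octal file mode written in a manifest appears in decimal form on stored objects. The mode value below is illustrative, not taken from this test run:

```python
# Octal literal for owner read-only, a typical defaultMode for secret volumes.
default_mode = 0o400
print(default_mode)   # the decimal form seen in API objects: 256
print(oct(256))       # converting back recovers the octal: 0o400
```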
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4559,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:41.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 9 00:46:41.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2762' Apr 9 00:46:41.813: INFO: stderr: "" Apr 9 00:46:41.813: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 9 00:46:41.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2762' Apr 9 00:46:41.931: INFO: stderr: "" Apr 9 00:46:41.932: INFO: stdout: "update-demo-nautilus-7gbrm update-demo-nautilus-7jbg4 " Apr 9 00:46:41.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gbrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762' Apr 9 00:46:42.023: INFO: stderr: "" Apr 9 00:46:42.023: INFO: stdout: "" Apr 9 00:46:42.023: INFO: update-demo-nautilus-7gbrm is created but not running Apr 9 00:46:47.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2762' Apr 9 00:46:47.114: INFO: stderr: "" Apr 9 00:46:47.114: INFO: stdout: "update-demo-nautilus-7gbrm update-demo-nautilus-7jbg4 " Apr 9 00:46:47.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gbrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762' Apr 9 00:46:47.204: INFO: stderr: "" Apr 9 00:46:47.204: INFO: stdout: "true" Apr 9 00:46:47.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gbrm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2762' Apr 9 00:46:47.304: INFO: stderr: "" Apr 9 00:46:47.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 00:46:47.304: INFO: validating pod update-demo-nautilus-7gbrm Apr 9 00:46:47.308: INFO: got data: { "image": "nautilus.jpg" } Apr 9 00:46:47.309: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 9 00:46:47.309: INFO: update-demo-nautilus-7gbrm is verified up and running Apr 9 00:46:47.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jbg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762' Apr 9 00:46:47.411: INFO: stderr: "" Apr 9 00:46:47.411: INFO: stdout: "true" Apr 9 00:46:47.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jbg4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2762' Apr 9 00:46:47.507: INFO: stderr: "" Apr 9 00:46:47.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 9 00:46:47.507: INFO: validating pod update-demo-nautilus-7jbg4 Apr 9 00:46:47.511: INFO: got data: { "image": "nautilus.jpg" } Apr 9 00:46:47.511: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 9 00:46:47.511: INFO: update-demo-nautilus-7jbg4 is verified up and running STEP: using delete to clean up resources Apr 9 00:46:47.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2762' Apr 9 00:46:47.599: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 9 00:46:47.599: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 9 00:46:47.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2762' Apr 9 00:46:47.702: INFO: stderr: "No resources found in kubectl-2762 namespace.\n" Apr 9 00:46:47.702: INFO: stdout: "" Apr 9 00:46:47.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2762 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 00:46:47.798: INFO: stderr: "" Apr 9 00:46:47.798: INFO: stdout: "update-demo-nautilus-7gbrm\nupdate-demo-nautilus-7jbg4\n" Apr 9 00:46:48.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2762' Apr 9 00:46:48.419: INFO: stderr: "No resources found in kubectl-2762 namespace.\n" Apr 9 00:46:48.419: INFO: stdout: "" Apr 9 00:46:48.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2762 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 9 
00:46:48.518: INFO: stderr: "" Apr 9 00:46:48.518: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:46:48.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2762" for this suite. • [SLOW TEST:7.140 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":265,"skipped":4563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:46:48.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5231.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5231.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 9 00:46:54.709: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.712: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.715: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.718: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.729: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.732: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod 
dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.735: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.738: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:54.744: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:46:59.749: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.753: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.756: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod 
dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.760: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.770: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.774: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.777: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.780: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:46:59.786: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:47:04.750: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.754: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.757: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.760: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.776: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.793: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.796: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod 
dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.798: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:04.802: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:47:09.749: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.752: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.756: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.760: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod 
dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.785: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.788: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.793: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:09.798: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:47:14.750: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local 
from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.755: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.758: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.761: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.772: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.775: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.793: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.797: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the 
server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:14.803: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:47:19.750: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.753: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.756: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.759: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.766: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod 
dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.768: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.771: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.774: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local from pod dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d: the server could not find the requested resource (get pods dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d) Apr 9 00:47:19.779: INFO: Lookups using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5231.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5231.svc.cluster.local jessie_udp@dns-test-service-2.dns-5231.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5231.svc.cluster.local] Apr 9 00:47:24.787: INFO: DNS probes using dns-5231/dns-test-31b96e20-07cd-4053-a4b3-06349d60a84d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:47:25.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-5231" for this suite. • [SLOW TEST:36.749 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":266,"skipped":4592,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:47:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:47:25.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6230" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":267,"skipped":4601,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:47:25.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 9 00:47:25.458: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 9 00:47:25.468: INFO: Number of nodes with available pods: 0 Apr 9 00:47:25.468: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 9 00:47:25.560: INFO: Number of nodes with available pods: 0 Apr 9 00:47:25.560: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:26.574: INFO: Number of nodes with available pods: 0 Apr 9 00:47:26.574: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:27.567: INFO: Number of nodes with available pods: 0 Apr 9 00:47:27.568: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:28.564: INFO: Number of nodes with available pods: 0 Apr 9 00:47:28.564: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:29.565: INFO: Number of nodes with available pods: 1 Apr 9 00:47:29.565: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 9 00:47:29.592: INFO: Number of nodes with available pods: 1 Apr 9 00:47:29.592: INFO: Number of running nodes: 0, number of available pods: 1 Apr 9 00:47:30.595: INFO: Number of nodes with available pods: 0 Apr 9 00:47:30.595: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 9 00:47:30.611: INFO: Number of nodes with available pods: 0 Apr 9 00:47:30.611: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:31.615: INFO: Number of nodes with available pods: 0 Apr 9 00:47:31.615: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:32.615: INFO: Number of nodes with available pods: 0 Apr 9 00:47:32.615: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:33.615: INFO: Number of nodes with available pods: 0 Apr 9 00:47:33.615: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:34.681: INFO: Number of nodes with available pods: 0 Apr 9 00:47:34.681: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:35.615: INFO: Number of nodes with available pods: 0 Apr 9 
00:47:35.616: INFO: Node latest-worker is running more than one daemon pod Apr 9 00:47:36.639: INFO: Number of nodes with available pods: 1 Apr 9 00:47:36.639: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-655, will wait for the garbage collector to delete the pods Apr 9 00:47:36.704: INFO: Deleting DaemonSet.extensions daemon-set took: 6.59342ms Apr 9 00:47:37.004: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260088ms Apr 9 00:47:42.807: INFO: Number of nodes with available pods: 0 Apr 9 00:47:42.807: INFO: Number of running nodes: 0, number of available pods: 0 Apr 9 00:47:42.810: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-655/daemonsets","resourceVersion":"6556380"},"items":null} Apr 9 00:47:42.813: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-655/pods","resourceVersion":"6556380"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:47:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-655" for this suite. 
• [SLOW TEST:17.479 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":268,"skipped":4605,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 9 00:47:42.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e5a51cba-c929-4454-9914-4b7b42602805 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e5a51cba-c929-4454-9914-4b7b42602805 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 9 00:48:49.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4250" for this suite. 
• [SLOW TEST:66.430 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4609,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:48:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 9 00:48:54.451: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:48:54.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4229" for this suite.
• [SLOW TEST:5.565 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":270,"skipped":4616,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:48:54.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 9 00:48:54.950: INFO: Waiting up to 5m0s for pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96" in namespace "emptydir-5062" to be "Succeeded or Failed"
Apr 9 00:48:54.960: INFO: Pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437392ms
Apr 9 00:48:57.005: INFO: Pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055814613s
Apr 9 00:48:59.010: INFO: Pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96": Phase="Running", Reason="", readiness=true. Elapsed: 4.060391779s
Apr 9 00:49:01.013: INFO: Pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063077547s
STEP: Saw pod success
Apr 9 00:49:01.013: INFO: Pod "pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96" satisfied condition "Succeeded or Failed"
Apr 9 00:49:01.015: INFO: Trying to get logs from node latest-worker pod pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96 container test-container:
STEP: delete the pod
Apr 9 00:49:01.138: INFO: Waiting for pod pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96 to disappear
Apr 9 00:49:01.209: INFO: Pod pod-296dff71-1c05-4029-b4d7-1ab8a54dcc96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:49:01.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5062" for this suite.
• [SLOW TEST:6.373 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4630,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:49:01.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 9 00:49:09.396: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:09.441: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:11.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:11.446: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:13.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:13.446: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:15.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:15.446: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:17.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:17.445: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:19.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:19.446: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:21.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:21.445: INFO: Pod pod-with-poststart-http-hook still exists
Apr 9 00:49:23.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 9 00:49:23.446: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:49:23.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3444" for this suite.
• [SLOW TEST:22.234 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4662,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:49:23.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-18e49812-f5fe-4ac9-8ed6-4fd8a2882fe6
STEP: Creating secret with name s-test-opt-upd-42409610-650a-4789-ab3f-c9ad4108b80f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-18e49812-f5fe-4ac9-8ed6-4fd8a2882fe6
STEP: Updating secret s-test-opt-upd-42409610-650a-4789-ab3f-c9ad4108b80f
STEP: Creating secret with name s-test-opt-create-5e1a7741-3c67-457f-ba95-7225d805e3eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:49:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1510" for this suite.
• [SLOW TEST:8.225 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4666,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:49:31.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-b3402e85-bb8d-473f-9cd9-88eab7a6508a
STEP: Creating a pod to test consume secrets
Apr 9 00:49:31.780: INFO: Waiting up to 5m0s for pod "pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd" in namespace "secrets-2098" to be "Succeeded or Failed"
Apr 9 00:49:31.802: INFO: Pod "pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.890688ms
Apr 9 00:49:33.807: INFO: Pod "pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026995839s
Apr 9 00:49:35.812: INFO: Pod "pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031468557s
STEP: Saw pod success
Apr 9 00:49:35.812: INFO: Pod "pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd" satisfied condition "Succeeded or Failed"
Apr 9 00:49:35.814: INFO: Trying to get logs from node latest-worker pod pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd container secret-volume-test:
STEP: delete the pod
Apr 9 00:49:35.833: INFO: Waiting for pod pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd to disappear
Apr 9 00:49:35.838: INFO: Pod pod-secrets-aaf38c79-0685-4625-a14c-e12aa6b1d5dd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:49:35.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2098" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4670,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 9 00:49:35.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 9 00:49:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1525" for this suite.
• [SLOW TEST:5.006 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":275,"skipped":4675,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 9 00:49:40.853: INFO: Running AfterSuite actions on all nodes
Apr 9 00:49:40.853: INFO: Running AfterSuite actions on node 1
Apr 9 00:49:40.853: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4416.060 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS