I0417 23:36:45.459739 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0417 23:36:45.459985 7 e2e.go:124] Starting e2e run "332747fc-6e99-44e5-8f74-4a45449f9ce7" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587166604 - Will randomize all specs
Will run 275 of 4992 specs

Apr 17 23:36:45.512: INFO: >>> kubeConfig: /root/.kube/config
Apr 17 23:36:45.518: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 17 23:36:45.538: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 17 23:36:45.575: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 17 23:36:45.575: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 17 23:36:45.575: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 17 23:36:45.588: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 17 23:36:45.588: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 17 23:36:45.588: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 17 23:36:45.589: INFO: kube-apiserver version: v1.17.0
Apr 17 23:36:45.589: INFO: >>> kubeConfig: /root/.kube/config
Apr 17 23:36:45.594: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:36:45.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Apr 17 23:36:45.680: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 17 23:36:45.687: INFO: Waiting up to 5m0s for pod "downward-api-47207dbd-d804-4571-a404-d1862ea9fb88" in namespace "downward-api-8581" to be "Succeeded or Failed"
Apr 17 23:36:45.693: INFO: Pod "downward-api-47207dbd-d804-4571-a404-d1862ea9fb88": Phase="Pending", Reason="", readiness=false. Elapsed: 5.545771ms
Apr 17 23:36:47.697: INFO: Pod "downward-api-47207dbd-d804-4571-a404-d1862ea9fb88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010052785s
Apr 17 23:36:49.720: INFO: Pod "downward-api-47207dbd-d804-4571-a404-d1862ea9fb88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033079871s
STEP: Saw pod success
Apr 17 23:36:49.721: INFO: Pod "downward-api-47207dbd-d804-4571-a404-d1862ea9fb88" satisfied condition "Succeeded or Failed"
Apr 17 23:36:49.724: INFO: Trying to get logs from node latest-worker pod downward-api-47207dbd-d804-4571-a404-d1862ea9fb88 container dapi-container:
STEP: delete the pod
Apr 17 23:36:49.775: INFO: Waiting for pod downward-api-47207dbd-d804-4571-a404-d1862ea9fb88 to disappear
Apr 17 23:36:49.789: INFO: Pod downward-api-47207dbd-d804-4571-a404-d1862ea9fb88 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:36:49.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8581" for this suite.
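For context, the downward-api test above creates a pod whose container receives the pod's own UID through an environment variable populated by a `fieldRef`. A minimal sketch of that kind of manifest, built as a plain Python dict — the names (`POD_UID`, the image, the command) are illustrative, not the exact values the e2e framework uses:

```python
# Sketch of a downward-API pod: the kubelet fills POD_UID in from
# metadata.uid, so the container can observe its own UID via `env`.
def downward_api_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "busybox",
                "command": ["sh", "-c", "env"],
                "env": [{
                    "name": "POD_UID",
                    "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
                }],
            }],
        },
    }

pod = downward_api_pod("downward-api-demo")
```

The test then waits for the pod to reach "Succeeded or Failed" and checks the container's log output for the expected UID.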
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":32,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:36:49.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 17 23:36:54.398: INFO: Successfully updated pod "pod-update-activedeadlineseconds-214975cf-e859-4966-994e-5a0cbb7ad6ab"
Apr 17 23:36:54.398: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-214975cf-e859-4966-994e-5a0cbb7ad6ab" in namespace "pods-737" to be "terminated due to deadline exceeded"
Apr 17 23:36:54.421: INFO: Pod "pod-update-activedeadlineseconds-214975cf-e859-4966-994e-5a0cbb7ad6ab": Phase="Running", Reason="", readiness=true. Elapsed: 22.279179ms
Apr 17 23:36:56.425: INFO: Pod "pod-update-activedeadlineseconds-214975cf-e859-4966-994e-5a0cbb7ad6ab": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.026339336s
Apr 17 23:36:56.425: INFO: Pod "pod-update-activedeadlineseconds-214975cf-e859-4966-994e-5a0cbb7ad6ab" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:36:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-737" for this suite.
• [SLOW TEST:6.637 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":40,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:36:56.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-gvtk
STEP: Creating a pod to test atomic-volume-subpath
Apr 17 23:36:56.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gvtk" in namespace "subpath-1349" to be "Succeeded or Failed"
Apr 17 23:36:56.529: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309914ms
Apr 17 23:36:58.563: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038198034s
Apr 17 23:37:00.566: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 4.041934382s
Apr 17 23:37:02.571: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 6.045993044s
Apr 17 23:37:04.575: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 8.050113719s
Apr 17 23:37:06.578: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 10.05380738s
Apr 17 23:37:08.582: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 12.057598836s
Apr 17 23:37:10.587: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 14.06233411s
Apr 17 23:37:12.591: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 16.066850928s
Apr 17 23:37:14.595: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 18.070409392s
Apr 17 23:37:16.599: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 20.074382872s
Apr 17 23:37:18.603: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Running", Reason="", readiness=true. Elapsed: 22.078810033s
Apr 17 23:37:20.608: INFO: Pod "pod-subpath-test-configmap-gvtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.083297147s
STEP: Saw pod success
Apr 17 23:37:20.608: INFO: Pod "pod-subpath-test-configmap-gvtk" satisfied condition "Succeeded or Failed"
Apr 17 23:37:20.611: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-gvtk container test-container-subpath-configmap-gvtk:
STEP: delete the pod
Apr 17 23:37:20.630: INFO: Waiting for pod pod-subpath-test-configmap-gvtk to disappear
Apr 17 23:37:20.645: INFO: Pod pod-subpath-test-configmap-gvtk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gvtk
Apr 17 23:37:20.646: INFO: Deleting pod "pod-subpath-test-configmap-gvtk" in namespace "subpath-1349"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:37:20.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1349" for this suite.
• [SLOW TEST:24.254 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":3,"skipped":43,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:37:20.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 17 23:37:28.836: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:28.858: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:30.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:30.863: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:32.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:32.863: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:34.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:34.862: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:36.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:36.863: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:38.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:38.863: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:40.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:40.863: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:42.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:43.100: INFO: Pod pod-with-poststart-http-hook still exists
Apr 17 23:37:44.858: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 17 23:37:44.862: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:37:44.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1602" for this suite.
• [SLOW TEST:24.182 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":56,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:37:44.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Apr 17 23:37:44.927: INFO: Waiting up to 5m0s for pod "var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f" in namespace "var-expansion-418" to be "Succeeded or Failed"
Apr 17 23:37:44.931: INFO: Pod "var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.671787ms
Apr 17 23:37:46.936: INFO: Pod "var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008485773s
Apr 17 23:37:48.940: INFO: Pod "var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012643129s
STEP: Saw pod success
Apr 17 23:37:48.940: INFO: Pod "var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f" satisfied condition "Succeeded or Failed"
Apr 17 23:37:48.943: INFO: Trying to get logs from node latest-worker2 pod var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f container dapi-container:
STEP: delete the pod
Apr 17 23:37:48.963: INFO: Waiting for pod var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f to disappear
Apr 17 23:37:48.967: INFO: Pod var-expansion-4945767e-ae98-4e3a-8ea0-7d5edce7b96f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:37:48.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-418" for this suite.
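The var-expansion test above relies on the kubelet substituting `$(VAR)` references in a container's `command`/`args` with values from that container's `env`; unresolved references are left verbatim, and `$$` escapes a literal `$(`. A rough stand-in for that substitution rule (a sketch of the semantics, not the kubelet's actual implementation):

```python
import re

# Minimal sketch of Kubernetes $(VAR) expansion for command/args:
#  - $(NAME) is replaced from the env mapping,
#  - unknown references are left untouched,
#  - $$(NAME) escapes to the literal text $(NAME).
def expand(arg: str, env: dict) -> str:
    def repl(m):
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]            # drop the escaping '$'
        return env.get(m.group(1), m.group(0))  # unknown -> leave as-is
    return re.sub(r"\$?\$\(([A-Za-z0-9_]+)\)", repl, arg)

args = ["echo", expand("$(MESSAGE) world", {"MESSAGE": "hello"})]
```

With `MESSAGE=hello`, the container would run `echo hello world`, which is what the test then checks in the pod's log.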
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":70,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:37:48.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 23:37:53.156: INFO: Waiting up to 5m0s for pod "client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72" in namespace "pods-716" to be "Succeeded or Failed"
Apr 17 23:37:53.161: INFO: Pod "client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72": Phase="Pending", Reason="", readiness=false. Elapsed: 5.46199ms
Apr 17 23:37:55.165: INFO: Pod "client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009469543s
Apr 17 23:37:57.169: INFO: Pod "client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013343924s
STEP: Saw pod success
Apr 17 23:37:57.169: INFO: Pod "client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72" satisfied condition "Succeeded or Failed"
Apr 17 23:37:57.172: INFO: Trying to get logs from node latest-worker pod client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72 container env3cont:
STEP: delete the pod
Apr 17 23:37:57.206: INFO: Waiting for pod client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72 to disappear
Apr 17 23:37:57.222: INFO: Pod client-envvars-ca141e3e-b5d4-4051-8b06-818c488a7b72 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:37:57.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-716" for this suite.
• [SLOW TEST:8.256 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":92,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:37:57.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-wc2g
STEP: Creating a pod to test atomic-volume-subpath
Apr 17 23:37:57.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wc2g" in namespace "subpath-533" to be "Succeeded or Failed"
Apr 17 23:37:57.323: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374198ms
Apr 17 23:37:59.332: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01237677s
Apr 17 23:38:01.336: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 4.016639313s
Apr 17 23:38:03.340: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 6.020445124s
Apr 17 23:38:05.346: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 8.026170078s
Apr 17 23:38:07.350: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 10.029914684s
Apr 17 23:38:09.354: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 12.034111347s
Apr 17 23:38:11.358: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 14.038105253s
Apr 17 23:38:13.362: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 16.0422185s
Apr 17 23:38:15.364: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 18.044799272s
Apr 17 23:38:17.368: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 20.048420637s
Apr 17 23:38:19.373: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Running", Reason="", readiness=true. Elapsed: 22.052970358s
Apr 17 23:38:21.377: INFO: Pod "pod-subpath-test-configmap-wc2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057273086s
STEP: Saw pod success
Apr 17 23:38:21.377: INFO: Pod "pod-subpath-test-configmap-wc2g" satisfied condition "Succeeded or Failed"
Apr 17 23:38:21.380: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-wc2g container test-container-subpath-configmap-wc2g:
STEP: delete the pod
Apr 17 23:38:21.429: INFO: Waiting for pod pod-subpath-test-configmap-wc2g to disappear
Apr 17 23:38:21.435: INFO: Pod pod-subpath-test-configmap-wc2g no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wc2g
Apr 17 23:38:21.435: INFO: Deleting pod "pod-subpath-test-configmap-wc2g" in namespace "subpath-533"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:38:21.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-533" for this suite.
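The two Subpath/"Atomic writer volumes" tests above mount a single key of a ConfigMap into the container via `volumeMounts[].subPath`. A minimal sketch of that pod shape as a Python dict — the volume name, image, and paths here are illustrative, not the e2e framework's actual values:

```python
# Sketch of the subPath pattern: the volume carries the whole ConfigMap,
# but the container mounts only one key of it at a single file path.
def subpath_pod(name: str, configmap: str, key: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "config-volume",
                "configMap": {"name": configmap},
            }],
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["cat", "/etc/config/my-key"],
                "volumeMounts": [{
                    "name": "config-volume",
                    "mountPath": "/etc/config/my-key",
                    "subPath": key,  # mount just this key, not the whole map
                }],
            }],
        },
    }
```

The "atomic writer" part of the test then rewrites the ConfigMap repeatedly while the pod runs, which is why the log shows the pod in `Running` for ~20 seconds before succeeding.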
• [SLOW TEST:24.213 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":7,"skipped":109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:38:21.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 17 23:38:21.499: INFO: Waiting up to 5m0s for pod "pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907" in namespace "emptydir-3696" to be "Succeeded or Failed"
Apr 17 23:38:21.507: INFO: Pod "pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907": Phase="Pending", Reason="", readiness=false. Elapsed: 7.63638ms
Apr 17 23:38:23.511: INFO: Pod "pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011509471s
Apr 17 23:38:25.515: INFO: Pod "pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016030408s
STEP: Saw pod success
Apr 17 23:38:25.515: INFO: Pod "pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907" satisfied condition "Succeeded or Failed"
Apr 17 23:38:25.519: INFO: Trying to get logs from node latest-worker pod pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907 container test-container:
STEP: delete the pod
Apr 17 23:38:25.547: INFO: Waiting for pod pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907 to disappear
Apr 17 23:38:25.551: INFO: Pod pod-1ce17869-bf70-4fde-b3c9-eab05a3bd907 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:38:25.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3696" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":143,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:38:25.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 23:38:25.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 17 23:38:26.287: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:26Z generation:1 name:name1 resourceVersion:8920221 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dadd7454-b786-4a80-9e5e-55cf56fdbb5c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 17 23:38:36.292: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:36Z generation:1 name:name2 resourceVersion:8920268 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6fcf67c1-2a78-4d47-bab6-b701a671d40c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 17 23:38:46.300: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:26Z generation:2 name:name1 resourceVersion:8920297 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dadd7454-b786-4a80-9e5e-55cf56fdbb5c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 17 23:38:56.306: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:36Z generation:2 name:name2 resourceVersion:8920327 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6fcf67c1-2a78-4d47-bab6-b701a671d40c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 17 23:39:06.314: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:26Z generation:2 name:name1 resourceVersion:8920358 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dadd7454-b786-4a80-9e5e-55cf56fdbb5c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 17 23:39:16.321: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-17T23:38:36Z generation:2 name:name2 resourceVersion:8920388 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6fcf67c1-2a78-4d47-bab6-b701a671d40c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:39:26.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-328" for this suite.
• [SLOW TEST:61.281 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":9,"skipped":155,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:39:26.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 17 23:39:26.890: INFO: >>> kubeConfig: /root/.kube/config
Apr 17 23:39:29.815: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:39:40.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3911" for this suite.
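The two CRD tests above register custom resource definitions (the watch test uses the `noxus.mygroup.example.com` resource visible in its event dumps) and then exercise watch events and OpenAPI publishing. For reference, a sketch of the shape of such a CRD as a Python dict; the group/kind names and the trivial schema here are illustrative placeholders, not what the e2e suite actually registers:

```python
# Sketch of a namespaced CRD whose openAPIV3Schema the apiserver
# publishes in its OpenAPI document once the CRD is established.
def crd(group: str, kind: str, plural: str) -> dict:
    return {
        "apiVersion": "apiextensions.k8s.io/v1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},  # must be <plural>.<group>
        "spec": {
            "group": group,
            "scope": "Namespaced",
            "names": {"kind": kind, "plural": plural},
            "versions": [{
                "name": "v1",
                "served": True,
                "storage": True,
                "schema": {"openAPIV3Schema": {
                    "type": "object",
                    "properties": {"spec": {"type": "object"}},
                }},
            }],
        },
    }
```

The "multiple CRDs of different groups" test creates two such objects under distinct groups and verifies both schemas appear in the aggregated OpenAPI document.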
• [SLOW TEST:13.682 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":10,"skipped":161,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:39:40.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-mwzd STEP: Creating a pod to test atomic-volume-subpath Apr 17 23:39:40.625: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mwzd" in namespace "subpath-7907" to be "Succeeded or Failed" Apr 17 23:39:40.629: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.078381ms Apr 17 23:39:42.645: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020185741s Apr 17 23:39:44.650: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 4.024664456s Apr 17 23:39:46.654: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 6.02926863s Apr 17 23:39:48.658: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 8.033505039s Apr 17 23:39:50.663: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 10.037877687s Apr 17 23:39:52.667: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 12.042044471s Apr 17 23:39:54.671: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 14.046329715s Apr 17 23:39:56.675: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 16.049926866s Apr 17 23:39:58.679: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 18.054345799s Apr 17 23:40:00.684: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 20.058826569s Apr 17 23:40:02.691: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Running", Reason="", readiness=true. Elapsed: 22.066295905s Apr 17 23:40:04.696: INFO: Pod "pod-subpath-test-downwardapi-mwzd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.070753193s STEP: Saw pod success Apr 17 23:40:04.696: INFO: Pod "pod-subpath-test-downwardapi-mwzd" satisfied condition "Succeeded or Failed" Apr 17 23:40:04.699: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-mwzd container test-container-subpath-downwardapi-mwzd: STEP: delete the pod Apr 17 23:40:04.748: INFO: Waiting for pod pod-subpath-test-downwardapi-mwzd to disappear Apr 17 23:40:04.768: INFO: Pod pod-subpath-test-downwardapi-mwzd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mwzd Apr 17 23:40:04.768: INFO: Deleting pod "pod-subpath-test-downwardapi-mwzd" in namespace "subpath-7907" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:04.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7907" for this suite. • [SLOW TEST:24.256 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":11,"skipped":165,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
Apr 17 23:40:04.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 17 23:40:09.387: INFO: Successfully updated pod "labelsupdatecfb76802-87fb-4d88-a637-8e04c96a74cc" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:11.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1" for this suite. • [SLOW TEST:6.642 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":166,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:40:11.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5805" for this suite. • [SLOW TEST:16.279 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":13,"skipped":166,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:40:27.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:38.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8634" for this suite. 
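The two ResourceQuota specs above both follow the same pattern: create an object, assert the quota's used counters capture it, delete it, assert the counters release it. A toy model of that accounting, assuming hypothetical resource names and limits (this is not the real quota controller, just its bookkeeping rule):

```python
class QuotaExceeded(Exception):
    """Raised when a charge would push usage past the hard limit."""

class SimpleQuota:
    """Minimal sketch of ResourceQuota accounting: used <= hard, always."""

    def __init__(self, hard):
        self.hard = dict(hard)              # e.g. {"replicationcontrollers": 1}
        self.used = {k: 0 for k in hard}

    def charge(self, resource, n=1):
        """Capture usage when an object is created; reject over-limit."""
        if self.used[resource] + n > self.hard[resource]:
            raise QuotaExceeded(resource)
        self.used[resource] += n

    def release(self, resource, n=1):
        """Release usage when the object is deleted, as the status shows."""
        self.used[resource] = max(0, self.used[resource] - n)
```

The terminating/not-terminating scopes seen earlier only change *which* pods a quota charges; the capture/release cycle itself is the same.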
• [SLOW TEST:11.150 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":14,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:40:38.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 17 23:40:38.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 17 23:40:41.743: INFO: stderr: "" Apr 17 23:40:41.743: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:41.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3546" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":15,"skipped":217,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:40:41.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 17 23:40:49.055: INFO: 0 pods remaining Apr 17 23:40:49.055: INFO: 0 pods has nil DeletionTimestamp Apr 17 23:40:49.055: INFO: STEP: Gathering metrics W0417 23:40:50.274027 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 17 23:40:50.274: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:40:50.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6892" for this suite. 
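The garbage-collector spec above ("keep the rc around until all its pods are deleted if the deleteOptions says so") exercises foreground cascading deletion: dependents go first, the owner last. A toy sketch of that ordering rule, with hypothetical pod names (the real behavior lives in the kube-controller-manager's GC, not in client code):

```python
def foreground_delete(dependent_pods):
    """Model foreground deletion: return the order objects disappear.

    Every dependent pod is removed before the owning rc itself is,
    which is why the log shows '0 pods remaining' before the rc goes.
    """
    order = [f"pod/{name}" for name in dependent_pods]
    order.append("rc")  # owner is only removed once dependents are gone
    return order
```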
• [SLOW TEST:8.709 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":16,"skipped":235,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:40:50.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5282 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 17 23:40:51.126: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 23:40:51.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 23:40:53.645: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 23:40:55.644: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 17 23:40:57.645: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:40:59.645: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:41:01.644: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:41:03.645: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:41:05.645: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 23:41:05.651: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 23:41:07.656: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 17 23:41:11.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.146:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5282 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 23:41:11.751: INFO: >>> kubeConfig: /root/.kube/config I0417 23:41:11.786217 7 log.go:172] (0xc001bad6b0) (0xc002ac3180) Create stream I0417 23:41:11.786245 7 log.go:172] (0xc001bad6b0) (0xc002ac3180) Stream added, broadcasting: 1 I0417 23:41:11.787917 7 log.go:172] (0xc001bad6b0) Reply frame received for 1 I0417 23:41:11.787961 7 log.go:172] (0xc001bad6b0) (0xc002c28780) Create stream I0417 23:41:11.787976 7 log.go:172] (0xc001bad6b0) (0xc002c28780) Stream added, broadcasting: 3 I0417 23:41:11.788851 7 log.go:172] (0xc001bad6b0) Reply frame received for 3 I0417 23:41:11.788886 7 log.go:172] (0xc001bad6b0) (0xc00155ce60) Create stream I0417 23:41:11.788900 7 log.go:172] (0xc001bad6b0) (0xc00155ce60) Stream added, broadcasting: 5 I0417 23:41:11.789757 7 log.go:172] (0xc001bad6b0) Reply frame received for 5 I0417 23:41:11.870138 7 log.go:172] (0xc001bad6b0) Data frame received for 3 I0417 23:41:11.870191 7 log.go:172] (0xc002c28780) (3) Data frame handling I0417 23:41:11.870225 7 log.go:172] (0xc002c28780) (3) Data frame sent I0417 
23:41:11.870237 7 log.go:172] (0xc001bad6b0) Data frame received for 3 I0417 23:41:11.870247 7 log.go:172] (0xc002c28780) (3) Data frame handling I0417 23:41:11.870306 7 log.go:172] (0xc001bad6b0) Data frame received for 5 I0417 23:41:11.870342 7 log.go:172] (0xc00155ce60) (5) Data frame handling I0417 23:41:11.871594 7 log.go:172] (0xc001bad6b0) Data frame received for 1 I0417 23:41:11.871616 7 log.go:172] (0xc002ac3180) (1) Data frame handling I0417 23:41:11.871625 7 log.go:172] (0xc002ac3180) (1) Data frame sent I0417 23:41:11.871638 7 log.go:172] (0xc001bad6b0) (0xc002ac3180) Stream removed, broadcasting: 1 I0417 23:41:11.871651 7 log.go:172] (0xc001bad6b0) Go away received I0417 23:41:11.872136 7 log.go:172] (0xc001bad6b0) (0xc002ac3180) Stream removed, broadcasting: 1 I0417 23:41:11.872160 7 log.go:172] (0xc001bad6b0) (0xc002c28780) Stream removed, broadcasting: 3 I0417 23:41:11.872175 7 log.go:172] (0xc001bad6b0) (0xc00155ce60) Stream removed, broadcasting: 5 Apr 17 23:41:11.872: INFO: Found all expected endpoints: [netserver-0] Apr 17 23:41:11.875: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.99:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5282 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 23:41:11.875: INFO: >>> kubeConfig: /root/.kube/config I0417 23:41:11.906850 7 log.go:172] (0xc00388c2c0) (0xc00155d400) Create stream I0417 23:41:11.906879 7 log.go:172] (0xc00388c2c0) (0xc00155d400) Stream added, broadcasting: 1 I0417 23:41:11.909054 7 log.go:172] (0xc00388c2c0) Reply frame received for 1 I0417 23:41:11.909091 7 log.go:172] (0xc00388c2c0) (0xc001c8c000) Create stream I0417 23:41:11.909106 7 log.go:172] (0xc00388c2c0) (0xc001c8c000) Stream added, broadcasting: 3 I0417 23:41:11.910171 7 log.go:172] (0xc00388c2c0) Reply frame received for 3 I0417 23:41:11.910203 7 log.go:172] (0xc00388c2c0) 
(0xc002ac3360) Create stream I0417 23:41:11.910214 7 log.go:172] (0xc00388c2c0) (0xc002ac3360) Stream added, broadcasting: 5 I0417 23:41:11.911120 7 log.go:172] (0xc00388c2c0) Reply frame received for 5 I0417 23:41:11.980398 7 log.go:172] (0xc00388c2c0) Data frame received for 3 I0417 23:41:11.980434 7 log.go:172] (0xc001c8c000) (3) Data frame handling I0417 23:41:11.980446 7 log.go:172] (0xc001c8c000) (3) Data frame sent I0417 23:41:11.980453 7 log.go:172] (0xc00388c2c0) Data frame received for 3 I0417 23:41:11.980459 7 log.go:172] (0xc001c8c000) (3) Data frame handling I0417 23:41:11.980655 7 log.go:172] (0xc00388c2c0) Data frame received for 5 I0417 23:41:11.980673 7 log.go:172] (0xc002ac3360) (5) Data frame handling I0417 23:41:11.982053 7 log.go:172] (0xc00388c2c0) Data frame received for 1 I0417 23:41:11.982071 7 log.go:172] (0xc00155d400) (1) Data frame handling I0417 23:41:11.982084 7 log.go:172] (0xc00155d400) (1) Data frame sent I0417 23:41:11.982097 7 log.go:172] (0xc00388c2c0) (0xc00155d400) Stream removed, broadcasting: 1 I0417 23:41:11.982115 7 log.go:172] (0xc00388c2c0) Go away received I0417 23:41:11.982272 7 log.go:172] (0xc00388c2c0) (0xc00155d400) Stream removed, broadcasting: 1 I0417 23:41:11.982303 7 log.go:172] (0xc00388c2c0) (0xc001c8c000) Stream removed, broadcasting: 3 I0417 23:41:11.982326 7 log.go:172] (0xc00388c2c0) (0xc002ac3360) Stream removed, broadcasting: 5 Apr 17 23:41:11.982: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:41:11.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5282" for this suite. 
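Behind the stream noise above, the networking spec is simple: curl each netserver pod's `/hostName` endpoint from a host-network test pod and declare success once every expected hostname has been seen ("Found all expected endpoints"). A sketch of that final comparison, assuming the raw curl stdouts are collected into a list:

```python
def all_endpoints_found(expected_hostnames, curl_outputs):
    """True once the set of hostnames returned equals the expected set.

    curl_outputs are raw stdouts like 'netserver-0\n'; blank responses
    (a failed probe) are ignored rather than counted.
    """
    seen = {out.strip() for out in curl_outputs if out.strip()}
    return seen == set(expected_hostnames)
```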
• [SLOW TEST:21.529 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":236,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:41:11.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-8333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8333 to expose endpoints map[] Apr 17 23:41:12.134: INFO: Get endpoints failed (12.791731ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 17 23:41:13.138: INFO: successfully validated that service endpoint-test2 in namespace services-8333 exposes endpoints map[] (1.016660464s 
elapsed) STEP: Creating pod pod1 in namespace services-8333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8333 to expose endpoints map[pod1:[80]] Apr 17 23:41:16.215: INFO: successfully validated that service endpoint-test2 in namespace services-8333 exposes endpoints map[pod1:[80]] (3.068884632s elapsed) STEP: Creating pod pod2 in namespace services-8333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8333 to expose endpoints map[pod1:[80] pod2:[80]] Apr 17 23:41:20.567: INFO: successfully validated that service endpoint-test2 in namespace services-8333 exposes endpoints map[pod1:[80] pod2:[80]] (4.348269429s elapsed) STEP: Deleting pod pod1 in namespace services-8333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8333 to expose endpoints map[pod2:[80]] Apr 17 23:41:21.592: INFO: successfully validated that service endpoint-test2 in namespace services-8333 exposes endpoints map[pod2:[80]] (1.019959769s elapsed) STEP: Deleting pod pod2 in namespace services-8333 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8333 to expose endpoints map[] Apr 17 23:41:22.606: INFO: successfully validated that service endpoint-test2 in namespace services-8333 exposes endpoints map[] (1.007982363s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:41:22.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8333" for this suite. 
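Each "waiting up to 3m0s for service endpoint-test2 ... to expose endpoints map[...]" line above is one pass of a compare-and-retry loop: read the Endpoints object, compare the observed pod-to-ports map against the expectation, retry until they match. A sketch of the comparison step only (retry/timeout plumbing omitted; map shapes mirror the log's `map[pod1:[80]]` notation):

```python
def endpoints_match(expected, observed):
    """Compare maps like {"pod1": [80]}, ignoring port order.

    Both the pod set and each pod's port list must agree for the
    service to be considered fully exposed.
    """
    if set(expected) != set(observed):
        return False
    return all(sorted(expected[p]) == sorted(observed[p]) for p in expected)
```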
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.683 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":18,"skipped":238,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:41:22.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 17 23:41:26.784: INFO: Pod pod-hostip-2eb106f7-4270-4b2a-ac64-e7bdb10cc7f8 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:41:26.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-194" for this suite. 
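The hostIP spec above has to poll, because a Pending pod's status briefly reports no hostIP at all. A sketch of that wait, assuming the successive status reads are modeled as a scripted sequence of dicts rather than real API calls:

```python
def wait_for_host_ip(status_reads):
    """Return the first non-empty hostIP seen across status reads.

    status_reads stands in for repeated pod-status GETs; early reads
    may lack the field entirely or carry an empty string.
    """
    for status in status_reads:
        ip = status.get("hostIP", "")
        if ip:
            return ip
    raise TimeoutError("pod never reported a hostIP")
```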
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:41:26.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4514, will wait for the garbage collector to delete the pods Apr 17 23:41:32.927: INFO: Deleting Job.batch foo took: 6.34804ms Apr 17 23:41:33.228: INFO: Terminating Job.batch foo pods took: 300.241287ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:42:13.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4514" for this suite. 
• [SLOW TEST:46.247 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":20,"skipped":270,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:42:13.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6960 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 17 23:42:13.109: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 23:42:13.144: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 23:42:15.239: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 23:42:17.148: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 23:42:19.149: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:42:21.148: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 17 23:42:23.148: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:42:25.148: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:42:27.150: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:42:29.149: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 23:42:31.156: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 23:42:31.162: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 17 23:42:35.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostname&protocol=http&host=10.244.2.149&port=8080&tries=1'] Namespace:pod-network-test-6960 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 23:42:35.211: INFO: >>> kubeConfig: /root/.kube/config I0417 23:42:35.235615 7 log.go:172] (0xc002a5f1e0) (0xc00194d540) Create stream I0417 23:42:35.235645 7 log.go:172] (0xc002a5f1e0) (0xc00194d540) Stream added, broadcasting: 1 I0417 23:42:35.237462 7 log.go:172] (0xc002a5f1e0) Reply frame received for 1 I0417 23:42:35.237498 7 log.go:172] (0xc002a5f1e0) (0xc00194d680) Create stream I0417 23:42:35.237510 7 log.go:172] (0xc002a5f1e0) (0xc00194d680) Stream added, broadcasting: 3 I0417 23:42:35.238497 7 log.go:172] (0xc002a5f1e0) Reply frame received for 3 I0417 23:42:35.238521 7 log.go:172] (0xc002a5f1e0) (0xc00194d720) Create stream I0417 23:42:35.238530 7 log.go:172] (0xc002a5f1e0) (0xc00194d720) Stream added, broadcasting: 5 I0417 23:42:35.239355 7 log.go:172] (0xc002a5f1e0) Reply frame received for 5 I0417 23:42:35.321479 7 log.go:172] (0xc002a5f1e0) Data frame received for 3 I0417 23:42:35.321516 7 log.go:172] (0xc00194d680) (3) Data frame handling I0417 23:42:35.321536 7 log.go:172] (0xc00194d680) (3) Data frame sent I0417 23:42:35.321886 7 log.go:172] (0xc002a5f1e0) Data frame received for 5 I0417 
23:42:35.321923 7 log.go:172] (0xc00194d720) (5) Data frame handling I0417 23:42:35.322003 7 log.go:172] (0xc002a5f1e0) Data frame received for 3 I0417 23:42:35.322025 7 log.go:172] (0xc00194d680) (3) Data frame handling I0417 23:42:35.323677 7 log.go:172] (0xc002a5f1e0) Data frame received for 1 I0417 23:42:35.323751 7 log.go:172] (0xc00194d540) (1) Data frame handling I0417 23:42:35.323825 7 log.go:172] (0xc00194d540) (1) Data frame sent I0417 23:42:35.323859 7 log.go:172] (0xc002a5f1e0) (0xc00194d540) Stream removed, broadcasting: 1 I0417 23:42:35.323890 7 log.go:172] (0xc002a5f1e0) Go away received I0417 23:42:35.324075 7 log.go:172] (0xc002a5f1e0) (0xc00194d540) Stream removed, broadcasting: 1 I0417 23:42:35.324105 7 log.go:172] (0xc002a5f1e0) (0xc00194d680) Stream removed, broadcasting: 3 I0417 23:42:35.324127 7 log.go:172] (0xc002a5f1e0) (0xc00194d720) Stream removed, broadcasting: 5 Apr 17 23:42:35.324: INFO: Waiting for responses: map[] Apr 17 23:42:35.327: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostname&protocol=http&host=10.244.1.104&port=8080&tries=1'] Namespace:pod-network-test-6960 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 17 23:42:35.327: INFO: >>> kubeConfig: /root/.kube/config I0417 23:42:35.359459 7 log.go:172] (0xc002c329a0) (0xc0017d54a0) Create stream I0417 23:42:35.359487 7 log.go:172] (0xc002c329a0) (0xc0017d54a0) Stream added, broadcasting: 1 I0417 23:42:35.361418 7 log.go:172] (0xc002c329a0) Reply frame received for 1 I0417 23:42:35.361459 7 log.go:172] (0xc002c329a0) (0xc001902000) Create stream I0417 23:42:35.361475 7 log.go:172] (0xc002c329a0) (0xc001902000) Stream added, broadcasting: 3 I0417 23:42:35.362324 7 log.go:172] (0xc002c329a0) Reply frame received for 3 I0417 23:42:35.362352 7 log.go:172] (0xc002c329a0) (0xc0017d5540) Create stream I0417 23:42:35.362361 7 log.go:172] (0xc002c329a0) 
(0xc0017d5540) Stream added, broadcasting: 5 I0417 23:42:35.363097 7 log.go:172] (0xc002c329a0) Reply frame received for 5 I0417 23:42:35.432721 7 log.go:172] (0xc002c329a0) Data frame received for 3 I0417 23:42:35.432758 7 log.go:172] (0xc001902000) (3) Data frame handling I0417 23:42:35.432783 7 log.go:172] (0xc001902000) (3) Data frame sent I0417 23:42:35.433432 7 log.go:172] (0xc002c329a0) Data frame received for 5 I0417 23:42:35.433469 7 log.go:172] (0xc0017d5540) (5) Data frame handling I0417 23:42:35.433501 7 log.go:172] (0xc002c329a0) Data frame received for 3 I0417 23:42:35.433523 7 log.go:172] (0xc001902000) (3) Data frame handling I0417 23:42:35.434875 7 log.go:172] (0xc002c329a0) Data frame received for 1 I0417 23:42:35.434898 7 log.go:172] (0xc0017d54a0) (1) Data frame handling I0417 23:42:35.434917 7 log.go:172] (0xc0017d54a0) (1) Data frame sent I0417 23:42:35.434948 7 log.go:172] (0xc002c329a0) (0xc0017d54a0) Stream removed, broadcasting: 1 I0417 23:42:35.434991 7 log.go:172] (0xc002c329a0) Go away received I0417 23:42:35.435111 7 log.go:172] (0xc002c329a0) (0xc0017d54a0) Stream removed, broadcasting: 1 I0417 23:42:35.435144 7 log.go:172] (0xc002c329a0) (0xc001902000) Stream removed, broadcasting: 3 I0417 23:42:35.435163 7 log.go:172] (0xc002c329a0) (0xc0017d5540) Stream removed, broadcasting: 5 Apr 17 23:42:35.435: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:42:35.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6960" for this suite. 
• [SLOW TEST:22.404 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":275,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:42:35.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6524
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-6524
Apr 17 23:42:35.561: INFO: Found 0 stateful pods, waiting for 1
Apr 17 23:42:45.565: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 17 23:42:45.600: INFO: Deleting all statefulset in ns statefulset-6524
Apr 17 23:42:45.616: INFO: Scaling statefulset ss to 0
Apr 17 23:42:55.680: INFO: Waiting for statefulset status.replicas updated to 0
Apr 17 23:42:55.683: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:42:55.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6524" for this suite.
• [SLOW TEST:20.277 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":22,"skipped":294,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes
client Apr 17 23:42:55.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1871 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1871 STEP: creating replication controller externalsvc in namespace services-1871 I0417 23:42:55.862249 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1871, replica count: 2 I0417 23:42:58.912652 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 23:43:01.912902 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 23:43:04.913239 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 17 23:43:04.972: INFO: Creating new exec pod Apr 17 23:43:08.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1871 execpod6przh -- /bin/sh -x -c nslookup nodeport-service' Apr 17 23:43:09.232: INFO: stderr: "I0417 23:43:09.127876 64 log.go:172] (0xc000ae8000) (0xc0006b52c0) Create stream\nI0417 23:43:09.127962 64 log.go:172] (0xc000ae8000) 
(0xc0006b52c0) Stream added, broadcasting: 1\nI0417 23:43:09.131403 64 log.go:172] (0xc000ae8000) Reply frame received for 1\nI0417 23:43:09.131449 64 log.go:172] (0xc000ae8000) (0xc00098a000) Create stream\nI0417 23:43:09.131461 64 log.go:172] (0xc000ae8000) (0xc00098a000) Stream added, broadcasting: 3\nI0417 23:43:09.132502 64 log.go:172] (0xc000ae8000) Reply frame received for 3\nI0417 23:43:09.132535 64 log.go:172] (0xc000ae8000) (0xc0006b5360) Create stream\nI0417 23:43:09.132547 64 log.go:172] (0xc000ae8000) (0xc0006b5360) Stream added, broadcasting: 5\nI0417 23:43:09.133539 64 log.go:172] (0xc000ae8000) Reply frame received for 5\nI0417 23:43:09.219517 64 log.go:172] (0xc000ae8000) Data frame received for 5\nI0417 23:43:09.219543 64 log.go:172] (0xc0006b5360) (5) Data frame handling\nI0417 23:43:09.219558 64 log.go:172] (0xc0006b5360) (5) Data frame sent\n+ nslookup nodeport-service\nI0417 23:43:09.224174 64 log.go:172] (0xc000ae8000) Data frame received for 3\nI0417 23:43:09.224197 64 log.go:172] (0xc00098a000) (3) Data frame handling\nI0417 23:43:09.224216 64 log.go:172] (0xc00098a000) (3) Data frame sent\nI0417 23:43:09.224858 64 log.go:172] (0xc000ae8000) Data frame received for 3\nI0417 23:43:09.224887 64 log.go:172] (0xc00098a000) (3) Data frame handling\nI0417 23:43:09.224913 64 log.go:172] (0xc00098a000) (3) Data frame sent\nI0417 23:43:09.225361 64 log.go:172] (0xc000ae8000) Data frame received for 5\nI0417 23:43:09.225393 64 log.go:172] (0xc000ae8000) Data frame received for 3\nI0417 23:43:09.225434 64 log.go:172] (0xc00098a000) (3) Data frame handling\nI0417 23:43:09.225473 64 log.go:172] (0xc0006b5360) (5) Data frame handling\nI0417 23:43:09.226926 64 log.go:172] (0xc000ae8000) Data frame received for 1\nI0417 23:43:09.226987 64 log.go:172] (0xc0006b52c0) (1) Data frame handling\nI0417 23:43:09.227009 64 log.go:172] (0xc0006b52c0) (1) Data frame sent\nI0417 23:43:09.227023 64 log.go:172] (0xc000ae8000) (0xc0006b52c0) Stream removed, broadcasting: 
1\nI0417 23:43:09.227039 64 log.go:172] (0xc000ae8000) Go away received\nI0417 23:43:09.227536 64 log.go:172] (0xc000ae8000) (0xc0006b52c0) Stream removed, broadcasting: 1\nI0417 23:43:09.227562 64 log.go:172] (0xc000ae8000) (0xc00098a000) Stream removed, broadcasting: 3\nI0417 23:43:09.227572 64 log.go:172] (0xc000ae8000) (0xc0006b5360) Stream removed, broadcasting: 5\n" Apr 17 23:43:09.233: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1871.svc.cluster.local\tcanonical name = externalsvc.services-1871.svc.cluster.local.\nName:\texternalsvc.services-1871.svc.cluster.local\nAddress: 10.96.245.72\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1871, will wait for the garbage collector to delete the pods Apr 17 23:43:09.292: INFO: Deleting ReplicationController externalsvc took: 5.952409ms Apr 17 23:43:09.592: INFO: Terminating ReplicationController externalsvc pods took: 300.258265ms Apr 17 23:43:23.113: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:43:23.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1871" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:27.464 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":23,"skipped":301,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:23.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 17 23:43:27.808: INFO: Successfully updated pod "annotationupdatefba9583a-69f2-43c9-919d-c4581929765b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:43:29.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6629" for this suite.
• [SLOW TEST:6.660 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:29.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:43:33.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3920" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":383,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:33.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:43:34.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9663" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":26,"skipped":395,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:34.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 17 23:43:34.104: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:43:53.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2989" for this suite.
• [SLOW TEST:18.999 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":423,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:53.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 17 23:43:57.131: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:43:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3111" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":434,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:43:57.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 17 23:43:57.220: INFO: Creating deployment "test-recreate-deployment"
Apr 17 23:43:57.223: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 17 23:43:57.252: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 17 23:43:59.276: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 17 23:43:59.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763837,
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763837, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763837, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763837, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 17 23:44:01.282: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 17 23:44:01.290: INFO: Updating deployment test-recreate-deployment
Apr 17 23:44:01.290: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Apr 17 23:44:01.782: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9470 /apis/apps/v1/namespaces/deployment-9470/deployments/test-recreate-deployment 4eab1a72-de12-47bd-953e-07b51094eb68 8922098 2 2020-04-17 23:43:57 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033be608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-17 23:44:01 +0000 UTC,LastTransitionTime:2020-04-17 23:44:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-17 23:44:01 +0000 UTC,LastTransitionTime:2020-04-17 23:43:57 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 17 23:44:01.831: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9470 /apis/apps/v1/namespaces/deployment-9470/replicasets/test-recreate-deployment-5f94c574ff ec813235-e218-4b8e-b68f-0891ae280338 8922097 1 2020-04-17 23:44:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4eab1a72-de12-47bd-953e-07b51094eb68 0xc0033bea17 0xc0033bea18}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033bea88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 23:44:01.831: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 17 23:44:01.831: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-9470 /apis/apps/v1/namespaces/deployment-9470/replicasets/test-recreate-deployment-846c7dd955 91bda1ca-6684-497e-9401-f0fccc352a04 8922086 2 2020-04-17 23:43:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4eab1a72-de12-47bd-953e-07b51094eb68 0xc0033beaf7 0xc0033beaf8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033beb68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 17 23:44:01.834: INFO: Pod "test-recreate-deployment-5f94c574ff-tc24d" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tc24d test-recreate-deployment-5f94c574ff- deployment-9470 /api/v1/namespaces/deployment-9470/pods/test-recreate-deployment-5f94c574ff-tc24d 735ae0a2-b259-4aa0-9a7d-4373f7aebb5e 8922099 0 2020-04-17 23:44:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ec813235-e218-4b8e-b68f-0891ae280338 0xc0033bf057 0xc0033bf058}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fnwtz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fnwtz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fnwtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-17 23:44:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:01.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9470" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":29,"skipped":447,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:01.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: 
submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:01.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1062" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":30,"skipped":461,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:02.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:06.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6540" for this suite. 
• [SLOW TEST:5.031 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":31,"skipped":463,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:07.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-111488cc-2672-4842-8e07-87a125e68d9d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-111488cc-2672-4842-8e07-87a125e68d9d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:13.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9711" for this suite. 
• [SLOW TEST:6.197 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:13.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 17 23:44:14.275: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 17 23:44:16.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763854, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763854, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763854, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763854, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 23:44:19.328: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:44:19.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:20.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9081" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.414 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":33,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:20.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 23:44:20.734: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32" in namespace "projected-9944" to be "Succeeded or Failed" Apr 17 23:44:20.738: INFO: Pod "downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.847215ms Apr 17 23:44:22.742: INFO: Pod "downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007967018s Apr 17 23:44:24.747: INFO: Pod "downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012490549s STEP: Saw pod success Apr 17 23:44:24.747: INFO: Pod "downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32" satisfied condition "Succeeded or Failed" Apr 17 23:44:24.749: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32 container client-container: STEP: delete the pod Apr 17 23:44:24.783: INFO: Waiting for pod downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32 to disappear Apr 17 23:44:24.798: INFO: Pod downwardapi-volume-4987c1e8-5149-4248-8c6a-12016caf9b32 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9944" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":525,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:24.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 23:44:24.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851" in namespace "downward-api-4656" to be "Succeeded or Failed" Apr 17 23:44:24.888: INFO: Pod "downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851": Phase="Pending", Reason="", readiness=false. Elapsed: 3.973186ms Apr 17 23:44:26.900: INFO: Pod "downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016175966s Apr 17 23:44:28.905: INFO: Pod "downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020636758s STEP: Saw pod success Apr 17 23:44:28.905: INFO: Pod "downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851" satisfied condition "Succeeded or Failed" Apr 17 23:44:28.908: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851 container client-container: STEP: delete the pod Apr 17 23:44:28.925: INFO: Waiting for pod downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851 to disappear Apr 17 23:44:28.929: INFO: Pod downwardapi-volume-758cfb46-2c81-448b-bfcf-0b7aaeb7c851 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:28.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4656" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":530,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:28.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-8be4ea62-596d-41cf-b1b2-44ded4ce99c3 STEP: Creating a pod to test consume secrets Apr 17 23:44:29.010: INFO: Waiting up to 5m0s for pod 
"pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87" in namespace "secrets-3683" to be "Succeeded or Failed" Apr 17 23:44:29.026: INFO: Pod "pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87": Phase="Pending", Reason="", readiness=false. Elapsed: 16.073103ms Apr 17 23:44:31.038: INFO: Pod "pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028014221s Apr 17 23:44:33.043: INFO: Pod "pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032721348s STEP: Saw pod success Apr 17 23:44:33.043: INFO: Pod "pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87" satisfied condition "Succeeded or Failed" Apr 17 23:44:33.046: INFO: Trying to get logs from node latest-worker pod pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87 container secret-volume-test: STEP: delete the pod Apr 17 23:44:33.098: INFO: Waiting for pod pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87 to disappear Apr 17 23:44:33.101: INFO: Pod pod-secrets-2a85192e-e1a6-4a4b-bb97-9962ecb49d87 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:33.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3683" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":536,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:33.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:44:33.187: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 17 23:44:38.191: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 17 23:44:38.191: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 17 23:44:42.253: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2643 /apis/apps/v1/namespaces/deployment-2643/deployments/test-cleanup-deployment ae7206e1-7766-43f9-8cf7-12ee21d3f3e3 8922601 1 2020-04-17 23:44:38 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003539068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-17 23:44:38 +0000 UTC,LastTransitionTime:2020-04-17 23:44:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-17 23:44:41 +0000 UTC,LastTransitionTime:2020-04-17 23:44:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 17 23:44:42.257: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-2643 
/apis/apps/v1/namespaces/deployment-2643/replicasets/test-cleanup-deployment-577c77b589 b3ccb902-6aed-4ab0-9d7a-49604d1ac78b 8922590 1 2020-04-17 23:44:38 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ae7206e1-7766-43f9-8cf7-12ee21d3f3e3 0xc0035394b7 0xc0035394b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003539528 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 23:44:42.260: INFO: Pod "test-cleanup-deployment-577c77b589-d2l28" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-d2l28 test-cleanup-deployment-577c77b589- deployment-2643 /api/v1/namespaces/deployment-2643/pods/test-cleanup-deployment-577c77b589-d2l28 41a5c66d-de03-460b-82cd-6a535d6011e9 8922589 0 2020-04-17 23:44:38 +0000 UTC map[name:cleanup-pod 
pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 b3ccb902-6aed-4ab0-9d7a-49604d1ac78b 0xc003539927 0xc003539928}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hzqbr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hzqbr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hzqbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-17 23:44:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.115,StartTime:2020-04-17 23:44:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-17 23:44:40 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://29acd3c8d81a540883f164d85ec1d68f8486458c058a478f57105e161cfdb672,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:42.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2643" for this suite. • [SLOW TEST:9.160 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":37,"skipped":546,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:42.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 23:44:42.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9" in namespace "projected-5569" to be "Succeeded or Failed" Apr 17 23:44:42.355: INFO: Pod "downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598392ms Apr 17 23:44:44.359: INFO: Pod "downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007875547s Apr 17 23:44:46.363: INFO: Pod "downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012077237s STEP: Saw pod success Apr 17 23:44:46.364: INFO: Pod "downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9" satisfied condition "Succeeded or Failed" Apr 17 23:44:46.367: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9 container client-container: STEP: delete the pod Apr 17 23:44:46.398: INFO: Waiting for pod downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9 to disappear Apr 17 23:44:46.439: INFO: Pod downwardapi-volume-e0937caf-907d-4a21-97eb-77d1c51caea9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:46.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5569" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":553,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:46.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:44:46.482: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 17 23:44:48.558: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:49.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4194" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":39,"skipped":565,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:49.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 17 23:44:50.226: INFO: Waiting up to 5m0s for pod "pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf" in namespace "emptydir-653" to be "Succeeded or Failed" Apr 17 23:44:50.254: INFO: Pod "pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.776653ms Apr 17 23:44:52.259: INFO: Pod "pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032328024s Apr 17 23:44:54.263: INFO: Pod "pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036237069s STEP: Saw pod success Apr 17 23:44:54.263: INFO: Pod "pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf" satisfied condition "Succeeded or Failed" Apr 17 23:44:54.266: INFO: Trying to get logs from node latest-worker pod pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf container test-container: STEP: delete the pod Apr 17 23:44:54.300: INFO: Waiting for pod pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf to disappear Apr 17 23:44:54.320: INFO: Pod pod-eb7a46c1-4d29-4dc6-a2ac-bf27f6f97bcf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:44:54.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-653" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":565,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:44:54.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 
23:44:54.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17" in namespace "downward-api-8787" to be "Succeeded or Failed" Apr 17 23:44:54.469: INFO: Pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17": Phase="Pending", Reason="", readiness=false. Elapsed: 21.015125ms Apr 17 23:44:56.474: INFO: Pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025549692s Apr 17 23:44:58.991: INFO: Pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17": Phase="Running", Reason="", readiness=true. Elapsed: 4.542826235s Apr 17 23:45:00.996: INFO: Pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.547812521s STEP: Saw pod success Apr 17 23:45:00.996: INFO: Pod "downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17" satisfied condition "Succeeded or Failed" Apr 17 23:45:00.998: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17 container client-container: STEP: delete the pod Apr 17 23:45:01.028: INFO: Waiting for pod downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17 to disappear Apr 17 23:45:01.044: INFO: Pod downwardapi-volume-9bd2a9e4-ec6a-4be1-9777-430c5284de17 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:01.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8787" for this suite. 
• [SLOW TEST:6.702 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":586,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:01.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:45:01.100: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:05.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-553" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":593,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:05.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 17 23:45:15.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 23:45:15.320: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 23:45:17.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 23:45:17.324: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 23:45:19.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 23:45:19.324: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 23:45:21.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 23:45:21.325: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 23:45:23.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 23:45:23.325: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:23.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7389" for this suite. 
• [SLOW TEST:18.157 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":594,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:23.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3674 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3674 I0417 23:45:23.466636 7 runners.go:190] Created replication controller with name: externalname-service, 
namespace: services-3674, replica count: 2 I0417 23:45:26.516999 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 23:45:29.517239 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 23:45:29.517: INFO: Creating new exec pod Apr 17 23:45:34.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpoddnpf5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 17 23:45:34.769: INFO: stderr: "I0417 23:45:34.671059 86 log.go:172] (0xc000525ce0) (0xc0009d2320) Create stream\nI0417 23:45:34.671114 86 log.go:172] (0xc000525ce0) (0xc0009d2320) Stream added, broadcasting: 1\nI0417 23:45:34.673638 86 log.go:172] (0xc000525ce0) Reply frame received for 1\nI0417 23:45:34.673675 86 log.go:172] (0xc000525ce0) (0xc00033f860) Create stream\nI0417 23:45:34.673686 86 log.go:172] (0xc000525ce0) (0xc00033f860) Stream added, broadcasting: 3\nI0417 23:45:34.674795 86 log.go:172] (0xc000525ce0) Reply frame received for 3\nI0417 23:45:34.674857 86 log.go:172] (0xc000525ce0) (0xc0006854a0) Create stream\nI0417 23:45:34.674876 86 log.go:172] (0xc000525ce0) (0xc0006854a0) Stream added, broadcasting: 5\nI0417 23:45:34.675771 86 log.go:172] (0xc000525ce0) Reply frame received for 5\nI0417 23:45:34.761337 86 log.go:172] (0xc000525ce0) Data frame received for 5\nI0417 23:45:34.761371 86 log.go:172] (0xc0006854a0) (5) Data frame handling\nI0417 23:45:34.761392 86 log.go:172] (0xc0006854a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0417 23:45:34.761679 86 log.go:172] (0xc000525ce0) Data frame received for 5\nI0417 23:45:34.761702 86 log.go:172] (0xc0006854a0) (5) Data frame handling\nI0417 23:45:34.761714 86 log.go:172] (0xc0006854a0) (5) Data frame 
sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0417 23:45:34.762071 86 log.go:172] (0xc000525ce0) Data frame received for 3\nI0417 23:45:34.762086 86 log.go:172] (0xc00033f860) (3) Data frame handling\nI0417 23:45:34.762181 86 log.go:172] (0xc000525ce0) Data frame received for 5\nI0417 23:45:34.762198 86 log.go:172] (0xc0006854a0) (5) Data frame handling\nI0417 23:45:34.764201 86 log.go:172] (0xc000525ce0) Data frame received for 1\nI0417 23:45:34.764226 86 log.go:172] (0xc0009d2320) (1) Data frame handling\nI0417 23:45:34.764274 86 log.go:172] (0xc0009d2320) (1) Data frame sent\nI0417 23:45:34.764311 86 log.go:172] (0xc000525ce0) (0xc0009d2320) Stream removed, broadcasting: 1\nI0417 23:45:34.764532 86 log.go:172] (0xc000525ce0) Go away received\nI0417 23:45:34.764697 86 log.go:172] (0xc000525ce0) (0xc0009d2320) Stream removed, broadcasting: 1\nI0417 23:45:34.764714 86 log.go:172] (0xc000525ce0) (0xc00033f860) Stream removed, broadcasting: 3\nI0417 23:45:34.764729 86 log.go:172] (0xc000525ce0) (0xc0006854a0) Stream removed, broadcasting: 5\n" Apr 17 23:45:34.770: INFO: stdout: "" Apr 17 23:45:34.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpoddnpf5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.29.119 80' Apr 17 23:45:34.937: INFO: stderr: "I0417 23:45:34.879262 108 log.go:172] (0xc00003bb80) (0xc000a06500) Create stream\nI0417 23:45:34.879308 108 log.go:172] (0xc00003bb80) (0xc000a06500) Stream added, broadcasting: 1\nI0417 23:45:34.881853 108 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0417 23:45:34.881887 108 log.go:172] (0xc00003bb80) (0xc00093e000) Create stream\nI0417 23:45:34.881902 108 log.go:172] (0xc00003bb80) (0xc00093e000) Stream added, broadcasting: 3\nI0417 23:45:34.882734 108 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0417 23:45:34.882760 108 log.go:172] (0xc00003bb80) (0xc00089a000) Create stream\nI0417 
23:45:34.882776 108 log.go:172] (0xc00003bb80) (0xc00089a000) Stream added, broadcasting: 5\nI0417 23:45:34.883541 108 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0417 23:45:34.931652 108 log.go:172] (0xc00003bb80) Data frame received for 3\nI0417 23:45:34.931700 108 log.go:172] (0xc00093e000) (3) Data frame handling\nI0417 23:45:34.931756 108 log.go:172] (0xc00003bb80) Data frame received for 5\nI0417 23:45:34.931792 108 log.go:172] (0xc00089a000) (5) Data frame handling\nI0417 23:45:34.931823 108 log.go:172] (0xc00089a000) (5) Data frame sent\nI0417 23:45:34.931840 108 log.go:172] (0xc00003bb80) Data frame received for 5\nI0417 23:45:34.931849 108 log.go:172] (0xc00089a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.29.119 80\nConnection to 10.96.29.119 80 port [tcp/http] succeeded!\nI0417 23:45:34.933559 108 log.go:172] (0xc00003bb80) Data frame received for 1\nI0417 23:45:34.933590 108 log.go:172] (0xc000a06500) (1) Data frame handling\nI0417 23:45:34.933603 108 log.go:172] (0xc000a06500) (1) Data frame sent\nI0417 23:45:34.933618 108 log.go:172] (0xc00003bb80) (0xc000a06500) Stream removed, broadcasting: 1\nI0417 23:45:34.933645 108 log.go:172] (0xc00003bb80) Go away received\nI0417 23:45:34.933948 108 log.go:172] (0xc00003bb80) (0xc000a06500) Stream removed, broadcasting: 1\nI0417 23:45:34.933965 108 log.go:172] (0xc00003bb80) (0xc00093e000) Stream removed, broadcasting: 3\nI0417 23:45:34.933974 108 log.go:172] (0xc00003bb80) (0xc00089a000) Stream removed, broadcasting: 5\n" Apr 17 23:45:34.937: INFO: stdout: "" Apr 17 23:45:34.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpoddnpf5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30033' Apr 17 23:45:35.155: INFO: stderr: "I0417 23:45:35.071045 127 log.go:172] (0xc00069c9a0) (0xc0006980a0) Create stream\nI0417 23:45:35.071104 127 log.go:172] (0xc00069c9a0) (0xc0006980a0) Stream added, broadcasting: 
1\nI0417 23:45:35.073364 127 log.go:172] (0xc00069c9a0) Reply frame received for 1\nI0417 23:45:35.073392 127 log.go:172] (0xc00069c9a0) (0xc0006b12c0) Create stream\nI0417 23:45:35.073408 127 log.go:172] (0xc00069c9a0) (0xc0006b12c0) Stream added, broadcasting: 3\nI0417 23:45:35.074324 127 log.go:172] (0xc00069c9a0) Reply frame received for 3\nI0417 23:45:35.074368 127 log.go:172] (0xc00069c9a0) (0xc0006b14a0) Create stream\nI0417 23:45:35.074383 127 log.go:172] (0xc00069c9a0) (0xc0006b14a0) Stream added, broadcasting: 5\nI0417 23:45:35.075362 127 log.go:172] (0xc00069c9a0) Reply frame received for 5\nI0417 23:45:35.149328 127 log.go:172] (0xc00069c9a0) Data frame received for 3\nI0417 23:45:35.149406 127 log.go:172] (0xc0006b12c0) (3) Data frame handling\nI0417 23:45:35.149444 127 log.go:172] (0xc00069c9a0) Data frame received for 5\nI0417 23:45:35.149463 127 log.go:172] (0xc0006b14a0) (5) Data frame handling\nI0417 23:45:35.149490 127 log.go:172] (0xc0006b14a0) (5) Data frame sent\nI0417 23:45:35.149509 127 log.go:172] (0xc00069c9a0) Data frame received for 5\nI0417 23:45:35.149525 127 log.go:172] (0xc0006b14a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30033\nConnection to 172.17.0.13 30033 port [tcp/30033] succeeded!\nI0417 23:45:35.150901 127 log.go:172] (0xc00069c9a0) Data frame received for 1\nI0417 23:45:35.150931 127 log.go:172] (0xc0006980a0) (1) Data frame handling\nI0417 23:45:35.150941 127 log.go:172] (0xc0006980a0) (1) Data frame sent\nI0417 23:45:35.150958 127 log.go:172] (0xc00069c9a0) (0xc0006980a0) Stream removed, broadcasting: 1\nI0417 23:45:35.150976 127 log.go:172] (0xc00069c9a0) Go away received\nI0417 23:45:35.151403 127 log.go:172] (0xc00069c9a0) (0xc0006980a0) Stream removed, broadcasting: 1\nI0417 23:45:35.151428 127 log.go:172] (0xc00069c9a0) (0xc0006b12c0) Stream removed, broadcasting: 3\nI0417 23:45:35.151440 127 log.go:172] (0xc00069c9a0) (0xc0006b14a0) Stream removed, broadcasting: 5\n" Apr 17 23:45:35.155: INFO: stdout: 
"" Apr 17 23:45:35.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpoddnpf5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30033' Apr 17 23:45:35.364: INFO: stderr: "I0417 23:45:35.286737 149 log.go:172] (0xc0003c98c0) (0xc00055a000) Create stream\nI0417 23:45:35.286806 149 log.go:172] (0xc0003c98c0) (0xc00055a000) Stream added, broadcasting: 1\nI0417 23:45:35.290328 149 log.go:172] (0xc0003c98c0) Reply frame received for 1\nI0417 23:45:35.290384 149 log.go:172] (0xc0003c98c0) (0xc000625220) Create stream\nI0417 23:45:35.290400 149 log.go:172] (0xc0003c98c0) (0xc000625220) Stream added, broadcasting: 3\nI0417 23:45:35.291493 149 log.go:172] (0xc0003c98c0) Reply frame received for 3\nI0417 23:45:35.291538 149 log.go:172] (0xc0003c98c0) (0xc00038e000) Create stream\nI0417 23:45:35.291555 149 log.go:172] (0xc0003c98c0) (0xc00038e000) Stream added, broadcasting: 5\nI0417 23:45:35.292596 149 log.go:172] (0xc0003c98c0) Reply frame received for 5\nI0417 23:45:35.358441 149 log.go:172] (0xc0003c98c0) Data frame received for 3\nI0417 23:45:35.358469 149 log.go:172] (0xc000625220) (3) Data frame handling\nI0417 23:45:35.358490 149 log.go:172] (0xc0003c98c0) Data frame received for 5\nI0417 23:45:35.358498 149 log.go:172] (0xc00038e000) (5) Data frame handling\nI0417 23:45:35.358518 149 log.go:172] (0xc00038e000) (5) Data frame sent\nI0417 23:45:35.358532 149 log.go:172] (0xc0003c98c0) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.12 30033\nConnection to 172.17.0.12 30033 port [tcp/30033] succeeded!\nI0417 23:45:35.358541 149 log.go:172] (0xc00038e000) (5) Data frame handling\nI0417 23:45:35.360082 149 log.go:172] (0xc0003c98c0) Data frame received for 1\nI0417 23:45:35.360105 149 log.go:172] (0xc00055a000) (1) Data frame handling\nI0417 23:45:35.360123 149 log.go:172] (0xc00055a000) (1) Data frame sent\nI0417 23:45:35.360138 149 log.go:172] (0xc0003c98c0) (0xc00055a000) 
Stream removed, broadcasting: 1\nI0417 23:45:35.360163 149 log.go:172] (0xc0003c98c0) Go away received\nI0417 23:45:35.360557 149 log.go:172] (0xc0003c98c0) (0xc00055a000) Stream removed, broadcasting: 1\nI0417 23:45:35.360578 149 log.go:172] (0xc0003c98c0) (0xc000625220) Stream removed, broadcasting: 3\nI0417 23:45:35.360589 149 log.go:172] (0xc0003c98c0) (0xc00038e000) Stream removed, broadcasting: 5\n" Apr 17 23:45:35.364: INFO: stdout: "" Apr 17 23:45:35.364: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:35.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3674" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.088 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":44,"skipped":616,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:35.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 17 23:45:35.485: INFO: Waiting up to 5m0s for pod "pod-5752239a-726f-45aa-954d-ef3b0093f820" in namespace "emptydir-5099" to be "Succeeded or Failed" Apr 17 23:45:35.488: INFO: Pod "pod-5752239a-726f-45aa-954d-ef3b0093f820": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35288ms Apr 17 23:45:37.492: INFO: Pod "pod-5752239a-726f-45aa-954d-ef3b0093f820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007243868s Apr 17 23:45:39.496: INFO: Pod "pod-5752239a-726f-45aa-954d-ef3b0093f820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011706338s STEP: Saw pod success Apr 17 23:45:39.497: INFO: Pod "pod-5752239a-726f-45aa-954d-ef3b0093f820" satisfied condition "Succeeded or Failed" Apr 17 23:45:39.500: INFO: Trying to get logs from node latest-worker2 pod pod-5752239a-726f-45aa-954d-ef3b0093f820 container test-container: STEP: delete the pod Apr 17 23:45:39.537: INFO: Waiting for pod pod-5752239a-726f-45aa-954d-ef3b0093f820 to disappear Apr 17 23:45:39.548: INFO: Pod pod-5752239a-726f-45aa-954d-ef3b0093f820 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:39.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5099" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":618,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:39.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 17 23:45:39.627: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 17 23:45:39.650: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 17 23:45:39.650: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has 
resource requirements applied from LimitRange Apr 17 23:45:39.658: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 17 23:45:39.658: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 17 23:45:39.685: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 17 23:45:39.685: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 17 23:45:47.124: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources 
[AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:47.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-349" for this suite. • [SLOW TEST:7.658 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":46,"skipped":632,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:47.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 17 23:45:47.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4" in namespace 
"downward-api-621" to be "Succeeded or Failed" Apr 17 23:45:47.334: INFO: Pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.843395ms Apr 17 23:45:49.350: INFO: Pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021281187s Apr 17 23:45:51.354: INFO: Pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4": Phase="Running", Reason="", readiness=true. Elapsed: 4.025375099s Apr 17 23:45:53.405: INFO: Pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07628946s STEP: Saw pod success Apr 17 23:45:53.405: INFO: Pod "downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4" satisfied condition "Succeeded or Failed" Apr 17 23:45:53.502: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4 container client-container: STEP: delete the pod Apr 17 23:45:53.808: INFO: Waiting for pod downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4 to disappear Apr 17 23:45:53.812: INFO: Pod downwardapi-volume-690785f1-0996-4e62-ae16-f2030d2fc8f4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-621" for this suite. 
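The test above exercises the `defaultMode` field of a downwardAPI volume. A minimal pod of the same shape might look like the sketch below; the pod name, mount path, and the 0400 mode are illustrative, not the exact manifest the e2e framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file's octal permissions so the mode can be checked.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test asserts that files projected into the volume carry the requested mode, which is why the log waits for the pod to reach "Succeeded or Failed" and then reads the container's logs.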
• [SLOW TEST:6.605 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":641,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:53.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:54.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8505" for this suite. STEP: Destroying namespace "nspatchtest-aa4d7bf7-d2a7-47a9-b6af-941327ec7c29-3826" for this suite. 
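The Namespaces test above creates a namespace, patches a label onto it, and reads it back to confirm the label stuck. The patch body is a small merge patch; a sketch of its YAML form (the label key and value here are illustrative, not the ones the test uses):

```yaml
# Merge-patch body that adds a label to a Namespace object.
metadata:
  labels:
    testLabel: testValue   # hypothetical label
```

An equivalent manual invocation would be something like `kubectl patch namespace <name> --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'`.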
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":48,"skipped":642,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:54.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 17 23:45:54.133: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2211 /api/v1/namespaces/watch-2211/configmaps/e2e-watch-test-resource-version 03ea8c1d-d3ac-48ef-8e09-a878b07c8df5 8923233 0 2020-04-17 23:45:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:45:54.133: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2211 /api/v1/namespaces/watch-2211/configmaps/e2e-watch-test-resource-version 03ea8c1d-d3ac-48ef-8e09-a878b07c8df5 8923234 0 2020-04-17 23:45:54 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:54.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2211" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":49,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:54.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 17 23:45:54.240: INFO: Waiting up to 5m0s for pod "pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a" in namespace "emptydir-3239" to be "Succeeded or Failed" Apr 17 23:45:54.243: INFO: Pod "pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642455ms Apr 17 23:45:56.333: INFO: Pod "pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093192975s Apr 17 23:45:58.337: INFO: Pod "pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.09748642s STEP: Saw pod success Apr 17 23:45:58.337: INFO: Pod "pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a" satisfied condition "Succeeded or Failed" Apr 17 23:45:58.340: INFO: Trying to get logs from node latest-worker2 pod pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a container test-container: STEP: delete the pod Apr 17 23:45:58.407: INFO: Waiting for pod pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a to disappear Apr 17 23:45:58.411: INFO: Pod pod-ab1ee5b9-72f9-4e1b-818d-191bf9117a4a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:45:58.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3239" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":678,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:45:58.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6669.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6669.svc.cluster.local;test 
-n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6669.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6669.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6669.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 17 23:46:04.542: INFO: DNS probes using dns-6669/dns-test-9ea0dc9c-ad07-4d99-a1d9-4c4e1e815add succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:46:04.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6669" for this suite. 
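The probe scripts above loop over `getent hosts` lookups to prove the kubelet wrote the pod's own hostname into /etc/hosts. A stripped-down pod that performs the same style of check (name and image choice are illustrative, and this is a one-shot check rather than the test's 600-iteration loop):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-hosts-probe   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: busybox
    # Exits 0 only if the pod's own hostname resolves via /etc/hosts,
    # mirroring the getent checks in the probe script above.
    command: ["sh", "-c", "getent hosts $(hostname) && echo OK"]
```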
• [SLOW TEST:6.170 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":51,"skipped":700,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:46:04.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-2577 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2577 STEP: Deleting pre-stop pod Apr 17 23:46:17.767: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:46:17.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2577" for this suite. • [SLOW TEST:13.233 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":52,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:46:17.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 17 23:46:17.864: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 17 23:46:17.881: INFO: Waiting for terminating namespaces to be deleted... 
Apr 17 23:46:17.884: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 17 23:46:17.895: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.895: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 23:46:17.895: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.895: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 23:46:17.895: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 17 23:46:17.901: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.901: INFO: Container kindnet-cni ready: true, restart count 0 Apr 17 23:46:17.901: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.901: INFO: Container kube-proxy ready: true, restart count 0 Apr 17 23:46:17.901: INFO: server from prestop-2577 started at 2020-04-17 23:46:04 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.901: INFO: Container server ready: true, restart count 0 Apr 17 23:46:17.901: INFO: tester from prestop-2577 started at 2020-04-17 23:46:08 +0000 UTC (1 container statuses recorded) Apr 17 23:46:17.901: INFO: Container tester ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-acbc49a8-e804-41a3-aab4-1e7e0d47b913 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-acbc49a8-e804-41a3-aab4-1e7e0d47b913 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-acbc49a8-e804-41a3-aab4-1e7e0d47b913 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:46:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9358" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.645 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":53,"skipped":739,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:46:34.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 17 23:46:34.546: INFO: Waiting up to 5m0s for pod "pod-4beb25e0-81a6-414a-8702-a437c842c349" in namespace "emptydir-5578" to be "Succeeded or Failed" Apr 17 23:46:34.550: INFO: Pod "pod-4beb25e0-81a6-414a-8702-a437c842c349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023242ms Apr 17 23:46:36.554: INFO: Pod "pod-4beb25e0-81a6-414a-8702-a437c842c349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008351588s Apr 17 23:46:38.559: INFO: Pod "pod-4beb25e0-81a6-414a-8702-a437c842c349": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012891542s STEP: Saw pod success Apr 17 23:46:38.559: INFO: Pod "pod-4beb25e0-81a6-414a-8702-a437c842c349" satisfied condition "Succeeded or Failed" Apr 17 23:46:38.562: INFO: Trying to get logs from node latest-worker2 pod pod-4beb25e0-81a6-414a-8702-a437c842c349 container test-container: STEP: delete the pod Apr 17 23:46:38.581: INFO: Waiting for pod pod-4beb25e0-81a6-414a-8702-a437c842c349 to disappear Apr 17 23:46:38.602: INFO: Pod pod-4beb25e0-81a6-414a-8702-a437c842c349 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:46:38.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5578" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:46:38.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the 
webhook pod STEP: Wait for the deployment to be ready Apr 17 23:46:39.420: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 23:46:41.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 23:46:43.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722763999, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 23:46:46.509: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:46:46.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7928" for this suite. STEP: Destroying namespace "webhook-7928-markers" for this suite. 
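The webhook test above registers webhooks and then creates "dummy" configuration objects to verify the API server refuses to let admission webhooks mutate or block deletion of webhook configurations themselves. A sketch of such a dummy `ValidatingWebhookConfiguration` (object name, webhook name, and service path are illustrative; the `e2e-test-webhook` service name is taken from the log above):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-dummy-validating-webhook   # hypothetical name
webhooks:
- name: deny-nothing.example.com       # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                # don't block requests if the endpoint is down
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default
      name: e2e-test-webhook
      path: /always-allow              # hypothetical path
```

The test then deletes this object and asserts the deletion succeeds, proving the registered webhooks were not allowed to intercept it.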
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":55,"skipped":771,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:46:46.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-ed1c41e4-a5b2-439f-9a4c-5f8bfca798eb in namespace container-probe-6630 Apr 17 23:46:50.831: INFO: Started pod busybox-ed1c41e4-a5b2-439f-9a4c-5f8bfca798eb in namespace 
container-probe-6630 STEP: checking the pod's current state and verifying that restartCount is present Apr 17 23:46:50.834: INFO: Initial restart count of pod busybox-ed1c41e4-a5b2-439f-9a4c-5f8bfca798eb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:50:50.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6630" for this suite. • [SLOW TEST:244.145 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":772,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:50:50.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-4b782975-4af1-4135-99ec-596e19e41ee1 STEP: Creating a pod to test consume configMaps Apr 17 23:50:51.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1" in namespace "configmap-4872" to be "Succeeded or Failed" Apr 17 23:50:51.019: INFO: Pod "pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.463806ms Apr 17 23:50:53.024: INFO: Pod "pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023654622s Apr 17 23:50:55.028: INFO: Pod "pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027925382s STEP: Saw pod success Apr 17 23:50:55.028: INFO: Pod "pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1" satisfied condition "Succeeded or Failed" Apr 17 23:50:55.031: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1 container configmap-volume-test: STEP: delete the pod Apr 17 23:50:55.060: INFO: Waiting for pod pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1 to disappear Apr 17 23:50:55.076: INFO: Pod pod-configmaps-ecaf64e5-da75-4440-910b-9aec86a796b1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:50:55.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4872" for this suite. 
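The ConfigMap test above mounts one ConfigMap through two separate volumes in the same pod. A minimal pod of that shape might look like this; the pod, volume, and ConfigMap names are illustrative, not the generated ones in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Read the same key through both mount points.
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: demo-configmap      # both volumes reference the same ConfigMap
  - name: cm-b
    configMap:
      name: demo-configmap
```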
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":785,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:50:55.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 17 23:50:55.173: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 17 23:50:55.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 23:50:57.979: INFO: stderr: "" Apr 17 23:50:57.979: INFO: stdout: "service/agnhost-slave created\n" Apr 17 23:50:57.979: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 17 23:50:57.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 
23:50:58.268: INFO: stderr: "" Apr 17 23:50:58.268: INFO: stdout: "service/agnhost-master created\n" Apr 17 23:50:58.268: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 17 23:50:58.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 23:50:58.546: INFO: stderr: "" Apr 17 23:50:58.546: INFO: stdout: "service/frontend created\n" Apr 17 23:50:58.547: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 17 23:50:58.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 23:50:58.782: INFO: stderr: "" Apr 17 23:50:58.782: INFO: stdout: "deployment.apps/frontend created\n" Apr 17 23:50:58.782: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 17 23:50:58.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 23:50:59.055: INFO: stderr: "" Apr 17 23:50:59.055: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 17 23:50:59.055: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 17 23:50:59.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3740' Apr 17 23:50:59.306: INFO: stderr: "" Apr 17 23:50:59.306: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 17 23:50:59.306: INFO: Waiting for all frontend pods to be Running. Apr 17 23:51:09.357: INFO: Waiting for frontend to serve content. Apr 17 23:51:09.367: INFO: Trying to add a new entry to the guestbook. Apr 17 23:51:09.377: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 17 23:51:09.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:09.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:09.530: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 17 23:51:09.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:09.668: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:09.668: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 17 23:51:09.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:09.779: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:09.779: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 17 23:51:09.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:09.887: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:09.887: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 17 23:51:09.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:09.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:09.995: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 17 23:51:09.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3740' Apr 17 23:51:10.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:51:10.090: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:51:10.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3740" for this suite. 
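Every `Running '/usr/local/bin/kubectl --server=... --kubeconfig=... create -f - --namespace=...'` line above is the suite shelling out to kubectl and feeding the manifest on stdin. A rough stdlib-only Python equivalent — the `kubectl_cmd`/`apply_manifest` helpers are hypothetical names, but the flags mirror the log:

```python
import subprocess

def kubectl_cmd(verb, *args, server, kubeconfig, namespace):
    """Build the argv the log shows: an explicit --server and
    --kubeconfig, the verb and its args, then a trailing --namespace."""
    return ["kubectl", f"--server={server}", f"--kubeconfig={kubeconfig}",
            verb, *args, f"--namespace={namespace}"]

def apply_manifest(manifest, **kw):
    """Pipe a manifest to `kubectl create -f -`, as the suite does."""
    cmd = kubectl_cmd("create", "-f", "-", **kw)
    return subprocess.run(cmd, input=manifest.encode(),
                          capture_output=True, check=True)

# Reconstructing the exact invocation from the guestbook test above:
cmd = kubectl_cmd("create", "-f", "-",
                  server="https://172.30.12.66:32771",
                  kubeconfig="/root/.kube/config",
                  namespace="kubectl-3740")
```

The teardown lines swap `create -f -` for `delete --grace-period=0 --force -f -`, which is what triggers the "Immediate deletion does not wait for confirmation" warning captured in stderr.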
• [SLOW TEST:15.012 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":58,"skipped":787,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:51:10.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 17 23:51:18.795: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:51:18.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7148" for this suite. • [SLOW TEST:8.725 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":798,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:51:18.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating configMap with name projected-configmap-test-volume-4f93954b-8add-4b9c-91cc-ca16e7613047 STEP: Creating a pod to test consume configMaps Apr 17 23:51:18.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7" in namespace "projected-5519" to be "Succeeded or Failed" Apr 17 23:51:18.928: INFO: Pod "pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.190062ms Apr 17 23:51:20.932: INFO: Pod "pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014324813s Apr 17 23:51:22.936: INFO: Pod "pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018239398s STEP: Saw pod success Apr 17 23:51:22.936: INFO: Pod "pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7" satisfied condition "Succeeded or Failed" Apr 17 23:51:22.939: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7 container projected-configmap-volume-test: STEP: delete the pod Apr 17 23:51:22.973: INFO: Waiting for pod pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7 to disappear Apr 17 23:51:22.982: INFO: Pod pod-projected-configmaps-ddbcb1c9-92b6-4c23-adfc-22af155f91e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:51:22.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5519" for this suite. 
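The `{"msg":"PASSED ...","total":275,"completed":...,"skipped":...,"failed":...}` records interleaved through this log are machine-readable progress markers, one JSON object per finished spec. A small stdlib sketch of tallying suite progress from them (the field names are taken directly from the records above; the function name is illustrative):

```python
import json

def suite_progress(report_lines):
    """Scan log lines for Ginkgo-style JSON progress records and return
    (completed, skipped, failed) from the most recent one, or None."""
    last = None
    for line in report_lines:
        line = line.strip()
        if line.startswith('{"msg"'):
            last = json.loads(line)
    if last is None:
        return None
    return last["completed"], last["skipped"], last["failed"]

# The projected-configmap record from just above this block:
records = [
    '{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":818,"failed":0}',
]
# suite_progress(records) -> (60, 818, 0)
```

Because `completed + skipped` only ever grows, the latest record is enough to know where the 275-spec run stands.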
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:51:22.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8824/configmap-test-d074a487-d577-4c28-93df-2e9e13a7a18a STEP: Creating a pod to test consume configMaps Apr 17 23:51:23.110: INFO: Waiting up to 5m0s for pod "pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9" in namespace "configmap-8824" to be "Succeeded or Failed" Apr 17 23:51:23.128: INFO: Pod "pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114355ms Apr 17 23:51:25.131: INFO: Pod "pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020955436s Apr 17 23:51:27.134: INFO: Pod "pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024734576s STEP: Saw pod success Apr 17 23:51:27.135: INFO: Pod "pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9" satisfied condition "Succeeded or Failed" Apr 17 23:51:27.138: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9 container env-test: STEP: delete the pod Apr 17 23:51:27.175: INFO: Waiting for pod pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9 to disappear Apr 17 23:51:27.193: INFO: Pod pod-configmaps-2821994d-c05e-4ab2-9b7e-7629c9c1c2d9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:51:27.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8824" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":885,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:51:27.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace 
statefulset-6888 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 17 23:51:27.352: INFO: Found 0 stateful pods, waiting for 3 Apr 17 23:51:37.367: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 17 23:51:37.367: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 17 23:51:37.367: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 17 23:51:47.357: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 17 23:51:47.357: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 17 23:51:47.357: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 17 23:51:47.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6888 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 23:51:47.603: INFO: stderr: "I0417 23:51:47.500454 420 log.go:172] (0xc0009b86e0) (0xc0006794a0) Create stream\nI0417 23:51:47.500497 420 log.go:172] (0xc0009b86e0) (0xc0006794a0) Stream added, broadcasting: 1\nI0417 23:51:47.502399 420 log.go:172] (0xc0009b86e0) Reply frame received for 1\nI0417 23:51:47.502461 420 log.go:172] (0xc0009b86e0) (0xc000978000) Create stream\nI0417 23:51:47.502478 420 log.go:172] (0xc0009b86e0) (0xc000978000) Stream added, broadcasting: 3\nI0417 23:51:47.503346 420 log.go:172] (0xc0009b86e0) Reply frame received for 3\nI0417 23:51:47.503394 420 log.go:172] (0xc0009b86e0) (0xc000679540) Create stream\nI0417 23:51:47.503407 420 log.go:172] (0xc0009b86e0) (0xc000679540) Stream added, broadcasting: 5\nI0417 23:51:47.504342 420 log.go:172] 
(0xc0009b86e0) Reply frame received for 5\nI0417 23:51:47.565274 420 log.go:172] (0xc0009b86e0) Data frame received for 5\nI0417 23:51:47.565332 420 log.go:172] (0xc000679540) (5) Data frame handling\nI0417 23:51:47.565365 420 log.go:172] (0xc000679540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 23:51:47.594870 420 log.go:172] (0xc0009b86e0) Data frame received for 5\nI0417 23:51:47.594903 420 log.go:172] (0xc000679540) (5) Data frame handling\nI0417 23:51:47.594934 420 log.go:172] (0xc0009b86e0) Data frame received for 3\nI0417 23:51:47.594962 420 log.go:172] (0xc000978000) (3) Data frame handling\nI0417 23:51:47.594985 420 log.go:172] (0xc000978000) (3) Data frame sent\nI0417 23:51:47.595000 420 log.go:172] (0xc0009b86e0) Data frame received for 3\nI0417 23:51:47.595015 420 log.go:172] (0xc000978000) (3) Data frame handling\nI0417 23:51:47.596916 420 log.go:172] (0xc0009b86e0) Data frame received for 1\nI0417 23:51:47.596933 420 log.go:172] (0xc0006794a0) (1) Data frame handling\nI0417 23:51:47.596939 420 log.go:172] (0xc0006794a0) (1) Data frame sent\nI0417 23:51:47.597336 420 log.go:172] (0xc0009b86e0) (0xc0006794a0) Stream removed, broadcasting: 1\nI0417 23:51:47.597427 420 log.go:172] (0xc0009b86e0) Go away received\nI0417 23:51:47.597970 420 log.go:172] (0xc0009b86e0) (0xc0006794a0) Stream removed, broadcasting: 1\nI0417 23:51:47.598322 420 log.go:172] (0xc0009b86e0) (0xc000978000) Stream removed, broadcasting: 3\nI0417 23:51:47.598366 420 log.go:172] (0xc0009b86e0) (0xc000679540) Stream removed, broadcasting: 5\n" Apr 17 23:51:47.603: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 23:51:47.603: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 17 
23:51:57.636: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 17 23:52:07.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6888 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 23:52:07.880: INFO: stderr: "I0417 23:52:07.798585 443 log.go:172] (0xc000b942c0) (0xc000baae60) Create stream\nI0417 23:52:07.798650 443 log.go:172] (0xc000b942c0) (0xc000baae60) Stream added, broadcasting: 1\nI0417 23:52:07.804306 443 log.go:172] (0xc000b942c0) Reply frame received for 1\nI0417 23:52:07.804361 443 log.go:172] (0xc000b942c0) (0xc000555680) Create stream\nI0417 23:52:07.804380 443 log.go:172] (0xc000b942c0) (0xc000555680) Stream added, broadcasting: 3\nI0417 23:52:07.805729 443 log.go:172] (0xc000b942c0) Reply frame received for 3\nI0417 23:52:07.805772 443 log.go:172] (0xc000b942c0) (0xc0003d2aa0) Create stream\nI0417 23:52:07.805793 443 log.go:172] (0xc000b942c0) (0xc0003d2aa0) Stream added, broadcasting: 5\nI0417 23:52:07.806859 443 log.go:172] (0xc000b942c0) Reply frame received for 5\nI0417 23:52:07.873402 443 log.go:172] (0xc000b942c0) Data frame received for 3\nI0417 23:52:07.873436 443 log.go:172] (0xc000555680) (3) Data frame handling\nI0417 23:52:07.873467 443 log.go:172] (0xc000555680) (3) Data frame sent\nI0417 23:52:07.873486 443 log.go:172] (0xc000b942c0) Data frame received for 3\nI0417 23:52:07.873498 443 log.go:172] (0xc000555680) (3) Data frame handling\nI0417 23:52:07.873577 443 log.go:172] (0xc000b942c0) Data frame received for 5\nI0417 23:52:07.873601 443 log.go:172] (0xc0003d2aa0) (5) Data frame handling\nI0417 23:52:07.873620 443 log.go:172] (0xc0003d2aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 23:52:07.873634 443 log.go:172] (0xc000b942c0) Data frame received for 5\nI0417 23:52:07.873678 443 log.go:172] (0xc0003d2aa0) (5) 
Data frame handling\nI0417 23:52:07.875489 443 log.go:172] (0xc000b942c0) Data frame received for 1\nI0417 23:52:07.875520 443 log.go:172] (0xc000baae60) (1) Data frame handling\nI0417 23:52:07.875536 443 log.go:172] (0xc000baae60) (1) Data frame sent\nI0417 23:52:07.875553 443 log.go:172] (0xc000b942c0) (0xc000baae60) Stream removed, broadcasting: 1\nI0417 23:52:07.875572 443 log.go:172] (0xc000b942c0) Go away received\nI0417 23:52:07.876021 443 log.go:172] (0xc000b942c0) (0xc000baae60) Stream removed, broadcasting: 1\nI0417 23:52:07.876051 443 log.go:172] (0xc000b942c0) (0xc000555680) Stream removed, broadcasting: 3\nI0417 23:52:07.876065 443 log.go:172] (0xc000b942c0) (0xc0003d2aa0) Stream removed, broadcasting: 5\n" Apr 17 23:52:07.881: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 23:52:07.881: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 23:52:27.904: INFO: Waiting for StatefulSet statefulset-6888/ss2 to complete update Apr 17 23:52:27.904: INFO: Waiting for Pod statefulset-6888/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 17 23:52:37.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6888 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 23:52:38.155: INFO: stderr: "I0417 23:52:38.046948 463 log.go:172] (0xc000920dc0) (0xc0006de320) Create stream\nI0417 23:52:38.047040 463 log.go:172] (0xc000920dc0) (0xc0006de320) Stream added, broadcasting: 1\nI0417 23:52:38.050177 463 log.go:172] (0xc000920dc0) Reply frame received for 1\nI0417 23:52:38.050210 463 log.go:172] (0xc000920dc0) (0xc0006de3c0) Create stream\nI0417 23:52:38.050217 463 log.go:172] (0xc000920dc0) (0xc0006de3c0) Stream added, broadcasting: 3\nI0417 23:52:38.051212 
463 log.go:172] (0xc000920dc0) Reply frame received for 3\nI0417 23:52:38.051250 463 log.go:172] (0xc000920dc0) (0xc0008779a0) Create stream\nI0417 23:52:38.051267 463 log.go:172] (0xc000920dc0) (0xc0008779a0) Stream added, broadcasting: 5\nI0417 23:52:38.052414 463 log.go:172] (0xc000920dc0) Reply frame received for 5\nI0417 23:52:38.109838 463 log.go:172] (0xc000920dc0) Data frame received for 5\nI0417 23:52:38.109864 463 log.go:172] (0xc0008779a0) (5) Data frame handling\nI0417 23:52:38.109881 463 log.go:172] (0xc0008779a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0417 23:52:38.147922 463 log.go:172] (0xc000920dc0) Data frame received for 3\nI0417 23:52:38.147960 463 log.go:172] (0xc0006de3c0) (3) Data frame handling\nI0417 23:52:38.147977 463 log.go:172] (0xc0006de3c0) (3) Data frame sent\nI0417 23:52:38.148499 463 log.go:172] (0xc000920dc0) Data frame received for 5\nI0417 23:52:38.148514 463 log.go:172] (0xc0008779a0) (5) Data frame handling\nI0417 23:52:38.148856 463 log.go:172] (0xc000920dc0) Data frame received for 3\nI0417 23:52:38.148869 463 log.go:172] (0xc0006de3c0) (3) Data frame handling\nI0417 23:52:38.151322 463 log.go:172] (0xc000920dc0) Data frame received for 1\nI0417 23:52:38.151338 463 log.go:172] (0xc0006de320) (1) Data frame handling\nI0417 23:52:38.151347 463 log.go:172] (0xc0006de320) (1) Data frame sent\nI0417 23:52:38.151358 463 log.go:172] (0xc000920dc0) (0xc0006de320) Stream removed, broadcasting: 1\nI0417 23:52:38.151657 463 log.go:172] (0xc000920dc0) (0xc0006de320) Stream removed, broadcasting: 1\nI0417 23:52:38.151673 463 log.go:172] (0xc000920dc0) (0xc0006de3c0) Stream removed, broadcasting: 3\nI0417 23:52:38.151681 463 log.go:172] (0xc000920dc0) (0xc0008779a0) Stream removed, broadcasting: 5\n" Apr 17 23:52:38.156: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 23:52:38.156: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 23:52:48.195: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 17 23:52:58.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6888 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 23:52:58.453: INFO: stderr: "I0417 23:52:58.377517 483 log.go:172] (0xc000a260b0) (0xc0006f9680) Create stream\nI0417 23:52:58.377573 483 log.go:172] (0xc000a260b0) (0xc0006f9680) Stream added, broadcasting: 1\nI0417 23:52:58.380449 483 log.go:172] (0xc000a260b0) Reply frame received for 1\nI0417 23:52:58.380517 483 log.go:172] (0xc000a260b0) (0xc0005cd680) Create stream\nI0417 23:52:58.380542 483 log.go:172] (0xc000a260b0) (0xc0005cd680) Stream added, broadcasting: 3\nI0417 23:52:58.381948 483 log.go:172] (0xc000a260b0) Reply frame received for 3\nI0417 23:52:58.382003 483 log.go:172] (0xc000a260b0) (0xc0006f9720) Create stream\nI0417 23:52:58.382022 483 log.go:172] (0xc000a260b0) (0xc0006f9720) Stream added, broadcasting: 5\nI0417 23:52:58.383195 483 log.go:172] (0xc000a260b0) Reply frame received for 5\nI0417 23:52:58.446881 483 log.go:172] (0xc000a260b0) Data frame received for 3\nI0417 23:52:58.446918 483 log.go:172] (0xc0005cd680) (3) Data frame handling\nI0417 23:52:58.446943 483 log.go:172] (0xc0005cd680) (3) Data frame sent\nI0417 23:52:58.447119 483 log.go:172] (0xc000a260b0) Data frame received for 5\nI0417 23:52:58.447162 483 log.go:172] (0xc0006f9720) (5) Data frame handling\nI0417 23:52:58.447178 483 log.go:172] (0xc0006f9720) (5) Data frame sent\nI0417 23:52:58.447186 483 log.go:172] (0xc000a260b0) Data frame received for 5\nI0417 23:52:58.447195 483 log.go:172] (0xc0006f9720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0417 23:52:58.447218 483 log.go:172] (0xc000a260b0) Data frame received for 3\nI0417 
23:52:58.447232 483 log.go:172] (0xc0005cd680) (3) Data frame handling\nI0417 23:52:58.448805 483 log.go:172] (0xc000a260b0) Data frame received for 1\nI0417 23:52:58.448842 483 log.go:172] (0xc0006f9680) (1) Data frame handling\nI0417 23:52:58.448866 483 log.go:172] (0xc0006f9680) (1) Data frame sent\nI0417 23:52:58.448886 483 log.go:172] (0xc000a260b0) (0xc0006f9680) Stream removed, broadcasting: 1\nI0417 23:52:58.448907 483 log.go:172] (0xc000a260b0) Go away received\nI0417 23:52:58.449433 483 log.go:172] (0xc000a260b0) (0xc0006f9680) Stream removed, broadcasting: 1\nI0417 23:52:58.449459 483 log.go:172] (0xc000a260b0) (0xc0005cd680) Stream removed, broadcasting: 3\nI0417 23:52:58.449469 483 log.go:172] (0xc000a260b0) (0xc0006f9720) Stream removed, broadcasting: 5\n" Apr 17 23:52:58.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 23:52:58.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 17 23:53:08.474: INFO: Waiting for StatefulSet statefulset-6888/ss2 to complete update Apr 17 23:53:08.474: INFO: Waiting for Pod statefulset-6888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 17 23:53:08.474: INFO: Waiting for Pod statefulset-6888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 17 23:53:08.474: INFO: Waiting for Pod statefulset-6888/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 17 23:53:18.482: INFO: Waiting for StatefulSet statefulset-6888/ss2 to complete update Apr 17 23:53:18.482: INFO: Waiting for Pod statefulset-6888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 17 23:53:18.482: INFO: Waiting for Pod statefulset-6888/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 17 23:53:28.483: INFO: Waiting for StatefulSet statefulset-6888/ss2 to complete update Apr 17 23:53:28.483: INFO: Waiting 
for Pod statefulset-6888/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 17 23:53:38.483: INFO: Deleting all statefulset in ns statefulset-6888 Apr 17 23:53:38.486: INFO: Scaling statefulset ss2 to 0 Apr 17 23:53:58.505: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 23:53:58.508: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:53:58.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6888" for this suite. • [SLOW TEST:151.341 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":62,"skipped":886,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 
23:53:58.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 17 23:53:58.608: INFO: Waiting up to 5m0s for pod "pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a" in namespace "emptydir-6994" to be "Succeeded or Failed" Apr 17 23:53:58.611: INFO: Pod "pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.43195ms Apr 17 23:54:00.615: INFO: Pod "pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007321488s Apr 17 23:54:02.619: INFO: Pod "pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011483226s STEP: Saw pod success Apr 17 23:54:02.619: INFO: Pod "pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a" satisfied condition "Succeeded or Failed" Apr 17 23:54:02.623: INFO: Trying to get logs from node latest-worker pod pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a container test-container: STEP: delete the pod Apr 17 23:54:02.650: INFO: Waiting for pod pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a to disappear Apr 17 23:54:02.668: INFO: Pod pod-193d8b0a-f790-4231-a9f5-bcf2c9d07c0a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:02.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6994" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:02.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 23:54:03.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 17 23:54:05.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764443, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764443, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764443, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764443, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 23:54:08.203: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:08.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4706" for this suite. STEP: Destroying namespace "webhook-4706-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.027 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":64,"skipped":933,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:08.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:12.802: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7929" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":934,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:12.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 17 23:54:12.859: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:18.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5563" for this suite. 
• [SLOW TEST:5.768 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":66,"skipped":935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:18.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-xm4j5 in namespace proxy-5967 I0417 23:54:18.692714 7 runners.go:190] Created replication controller with name: proxy-service-xm4j5, namespace: proxy-5967, replica count: 1 I0417 23:54:19.743131 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 23:54:20.743381 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0417 23:54:21.743628 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 23:54:22.743827 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 23:54:23.744009 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 23:54:24.744249 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0417 23:54:25.744520 7 runners.go:190] proxy-service-xm4j5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 23:54:25.748: INFO: setup took 7.12462999s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 17 23:54:25.754: INFO: (0) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 5.785785ms) Apr 17 23:54:25.754: INFO: (0) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 6.103144ms) Apr 17 23:54:25.755: INFO: (0) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... 
(200; 7.099713ms) Apr 17 23:54:25.755: INFO: (0) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 7.093295ms) Apr 17 23:54:25.755: INFO: (0) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 7.084288ms) Apr 17 23:54:25.755: INFO: (0) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 7.39717ms) Apr 17 23:54:25.757: INFO: (0) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 8.862403ms) Apr 17 23:54:25.759: INFO: (0) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 11.136721ms) Apr 17 23:54:25.759: INFO: (0) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 11.376336ms) Apr 17 23:54:25.760: INFO: (0) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 12.20102ms) Apr 17 23:54:25.762: INFO: (0) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 13.885405ms) Apr 17 23:54:25.766: INFO: (0) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 18.443736ms) Apr 17 23:54:25.766: INFO: (0) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 18.51946ms) Apr 17 23:54:25.766: INFO: (0) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 7.983908ms) Apr 17 23:54:25.776: INFO: (1) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 8.025729ms) Apr 17 23:54:25.776: INFO: (1) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 8.301841ms) Apr 17 23:54:25.776: INFO: (1) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 8.661781ms) Apr 17 23:54:25.777: INFO: (1) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 9.215348ms) Apr 17 23:54:25.778: INFO: (1) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 10.189196ms) Apr 17 23:54:25.781: INFO: (2) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 3.575297ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 3.818404ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.875526ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 3.870934ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.891176ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 4.083756ms) Apr 17 23:54:25.782: INFO: (2) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 4.246465ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 4.646772ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 4.878061ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 4.889798ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 4.983993ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 4.960075ms) Apr 17 23:54:25.783: INFO: (2) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 4.965862ms) Apr 17 23:54:25.786: INFO: (3) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.366358ms) Apr 17 23:54:25.787: INFO: (3) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.563855ms) Apr 17 23:54:25.787: INFO: (3) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 3.875032ms) Apr 17 23:54:25.787: INFO: (3) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 4.13731ms) Apr 17 23:54:25.787: INFO: (3) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 4.680449ms) Apr 17 23:54:25.788: INFO: (3) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 4.741896ms) Apr 17 23:54:25.788: INFO: (3) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 4.739122ms) Apr 17 23:54:25.788: INFO: (3) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 4.938824ms) Apr 17 23:54:25.788: INFO: (3) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 4.896669ms) Apr 17 23:54:25.788: INFO: (3) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 4.954422ms) Apr 17 23:54:25.791: INFO: (4) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.174721ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 3.67591ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 3.631104ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.661373ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 3.613967ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 3.898853ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 3.98791ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 3.983304ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.971788ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.973492ms) Apr 17 23:54:25.792: INFO: (4) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 12.94878ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 12.901671ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 13.206202ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 13.158374ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 13.196646ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 13.185054ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 13.206241ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 13.470361ms) Apr 17 23:54:25.807: INFO: (5) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 13.668241ms) Apr 17 23:54:25.809: INFO: (5) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 15.170681ms) Apr 17 23:54:25.812: INFO: (5) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 18.636413ms) Apr 17 23:54:25.819: INFO: (6) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 6.316108ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 7.994203ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 7.853645ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 7.801411ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 7.965053ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... 
(200; 8.466292ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 8.348073ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 8.340933ms) Apr 17 23:54:25.821: INFO: (6) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 8.330301ms) Apr 17 23:54:25.822: INFO: (6) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 8.995767ms) Apr 17 23:54:25.825: INFO: (6) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 12.993395ms) Apr 17 23:54:25.825: INFO: (6) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 12.538137ms) Apr 17 23:54:25.831: INFO: (7) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 5.290577ms) Apr 17 23:54:25.831: INFO: (7) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 5.405657ms) Apr 17 23:54:25.832: INFO: (7) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 6.662126ms) Apr 17 23:54:25.832: INFO: (7) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 6.978909ms) Apr 17 23:54:25.833: INFO: (7) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 7.419038ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 7.968184ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 8.156313ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 8.18412ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 8.189148ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 8.434981ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 8.700528ms) Apr 17 23:54:25.834: INFO: (7) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 8.673296ms) Apr 17 23:54:25.842: INFO: (7) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 16.514457ms) Apr 17 23:54:25.842: INFO: (7) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 16.555583ms) Apr 17 23:54:25.842: INFO: (7) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... 
(200; 7.038424ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 6.989406ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 6.966886ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 6.988776ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 7.08511ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 7.062102ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 7.191883ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 7.128769ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 7.22892ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 7.222951ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 7.319254ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 7.332224ms) Apr 17 23:54:25.859: INFO: (8) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 3.725032ms) Apr 17 23:54:25.863: INFO: (9) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test<... (200; 3.786811ms) Apr 17 23:54:25.863: INFO: (9) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... 
(200; 3.76979ms) Apr 17 23:54:25.863: INFO: (9) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 3.786148ms) Apr 17 23:54:25.863: INFO: (9) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 4.019936ms) Apr 17 23:54:25.864: INFO: (9) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 4.54044ms) Apr 17 23:54:25.864: INFO: (9) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 4.57402ms) Apr 17 23:54:25.864: INFO: (9) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 4.679081ms) Apr 17 23:54:25.866: INFO: (10) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 2.524191ms) Apr 17 23:54:25.867: INFO: (10) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 8.145271ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 8.141687ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 8.102654ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 8.186453ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 8.147228ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 8.177143ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 8.521097ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 8.554143ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 8.649025ms) Apr 17 23:54:25.872: INFO: (10) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 8.662371ms) Apr 17 23:54:25.873: INFO: (10) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 8.724989ms) Apr 17 23:54:25.873: INFO: (10) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 9.039715ms) Apr 17 23:54:25.873: INFO: (10) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 9.504734ms) Apr 17 23:54:25.874: INFO: (10) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 10.172894ms) Apr 17 23:54:25.878: INFO: (11) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... 
(200; 4.023533ms) Apr 17 23:54:25.878: INFO: (11) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 4.154842ms) Apr 17 23:54:25.878: INFO: (11) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 4.408879ms) Apr 17 23:54:25.878: INFO: (11) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.487889ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 4.626783ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 4.628195ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.661891ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test<... (200; 4.751369ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 5.173396ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 5.417396ms) Apr 17 23:54:25.879: INFO: (11) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 5.385642ms) Apr 17 23:54:25.880: INFO: (11) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 5.440407ms) Apr 17 23:54:25.880: INFO: (11) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 5.417086ms) Apr 17 23:54:25.880: INFO: (11) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 5.565488ms) Apr 17 23:54:25.882: INFO: (12) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 2.351456ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: 
test<... (200; 3.294242ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.316458ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.690014ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 3.659836ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 3.634942ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 3.712735ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.613492ms) Apr 17 23:54:25.883: INFO: (12) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 4.067284ms) Apr 17 23:54:25.885: INFO: (12) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 5.113028ms) Apr 17 23:54:25.885: INFO: (12) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 5.111903ms) Apr 17 23:54:25.885: INFO: (12) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 5.1481ms) Apr 17 23:54:25.885: INFO: (12) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 5.497356ms) Apr 17 23:54:25.885: INFO: (12) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 5.514644ms) Apr 17 23:54:25.888: INFO: (13) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 2.321065ms) Apr 17 23:54:25.888: INFO: (13) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... 
(200; 2.811423ms) Apr 17 23:54:25.890: INFO: (13) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 4.975221ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 5.229654ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 5.187836ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 5.206174ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 5.318831ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 5.395936ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 5.39927ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 5.426212ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 5.463967ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 5.756371ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 5.714054ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 5.76933ms) Apr 17 23:54:25.891: INFO: (13) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 4.578408ms) Apr 17 23:54:25.896: INFO: (14) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test<... 
(200; 4.634023ms) Apr 17 23:54:25.896: INFO: (14) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 4.588604ms) Apr 17 23:54:25.896: INFO: (14) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.69067ms) Apr 17 23:54:25.896: INFO: (14) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 4.926099ms) Apr 17 23:54:25.898: INFO: (14) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 7.278585ms) Apr 17 23:54:25.898: INFO: (14) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 7.331756ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 7.320286ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 7.312881ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 7.393035ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 7.376012ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 7.35279ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 7.405493ms) Apr 17 23:54:25.899: INFO: (14) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 7.393645ms) Apr 17 23:54:25.901: INFO: (15) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 2.402631ms) Apr 17 23:54:25.901: INFO: (15) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 2.466056ms) Apr 17 23:54:25.902: INFO: (15) 
/api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 2.794114ms) Apr 17 23:54:25.902: INFO: (15) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 4.394228ms) Apr 17 23:54:25.903: INFO: (15) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 4.376379ms) Apr 17 23:54:25.903: INFO: (15) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.447147ms) Apr 17 23:54:25.903: INFO: (15) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.38118ms) Apr 17 23:54:25.903: INFO: (15) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 4.46409ms) Apr 17 23:54:25.903: INFO: (15) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 4.478253ms) Apr 17 23:54:25.904: INFO: (15) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 5.654271ms) Apr 17 23:54:25.904: INFO: (15) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 5.70199ms) Apr 17 23:54:25.904: INFO: (15) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 5.672487ms) Apr 17 23:54:25.904: INFO: (15) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 5.6391ms) Apr 17 23:54:25.904: INFO: (15) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 5.643524ms) Apr 17 23:54:25.906: INFO: (16) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 1.891888ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 3.068844ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.125259ms) Apr 17 
23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 3.645423ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.661149ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 3.825281ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 3.86884ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 3.80684ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.874501ms) Apr 17 23:54:25.908: INFO: (16) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 3.911215ms) Apr 17 23:54:25.909: INFO: (16) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 3.271421ms) Apr 17 23:54:25.912: INFO: (17) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: test (200; 3.325913ms) Apr 17 23:54:25.912: INFO: (17) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 3.505488ms) Apr 17 23:54:25.912: INFO: (17) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.464768ms) Apr 17 23:54:25.912: INFO: (17) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname2/proxy/: bar (200; 3.555539ms) Apr 17 23:54:25.913: INFO: (17) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 3.875407ms) Apr 17 23:54:25.913: INFO: (17) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 3.928331ms) Apr 17 23:54:25.913: INFO: (17) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 3.910509ms) Apr 17 23:54:25.913: INFO: (17) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 3.993955ms) Apr 17 23:54:25.913: INFO: (17) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 4.028965ms) Apr 17 23:54:25.915: INFO: (18) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 2.07906ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 2.580842ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 2.702244ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 2.740692ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 3.134413ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:1080/proxy/: ... (200; 3.126157ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 3.24796ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... 
(200; 3.297766ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 3.308956ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 3.2671ms) Apr 17 23:54:25.916: INFO: (18) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: ... (200; 3.366844ms) Apr 17 23:54:25.921: INFO: (19) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:1080/proxy/: test<... (200; 3.428874ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d:162/proxy/: bar (200; 3.971138ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/pods/http:proxy-service-xm4j5-hdq2d:160/proxy/: foo (200; 4.012417ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname2/proxy/: bar (200; 4.019612ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname2/proxy/: tls qux (200; 4.008064ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/services/proxy-service-xm4j5:portname1/proxy/: foo (200; 4.056213ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/services/https:proxy-service-xm4j5:tlsportname1/proxy/: tls baz (200; 4.049515ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/pods/proxy-service-xm4j5-hdq2d/proxy/: test (200; 4.114079ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:462/proxy/: tls qux (200; 4.135662ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/services/http:proxy-service-xm4j5:portname1/proxy/: foo (200; 4.162305ms) Apr 17 23:54:25.922: INFO: (19) /api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:460/proxy/: tls baz (200; 4.091674ms) Apr 17 23:54:25.922: INFO: (19) 
/api/v1/namespaces/proxy-5967/pods/https:proxy-service-xm4j5-hdq2d:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 17 23:54:32.869: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:48.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6771" for this suite. 
• [SLOW TEST:15.849 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":68,"skipped":973,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:48.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 17 23:54:48.713: INFO: Waiting up to 5m0s for pod "pod-3629cdcb-6238-42d0-a05c-4a5f8c415365" in namespace "emptydir-8878" to be "Succeeded or Failed" Apr 17 23:54:48.735: INFO: Pod "pod-3629cdcb-6238-42d0-a05c-4a5f8c415365": Phase="Pending", Reason="", readiness=false. Elapsed: 21.744221ms Apr 17 23:54:50.738: INFO: Pod "pod-3629cdcb-6238-42d0-a05c-4a5f8c415365": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024988609s Apr 17 23:54:52.742: INFO: Pod "pod-3629cdcb-6238-42d0-a05c-4a5f8c415365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029221017s STEP: Saw pod success Apr 17 23:54:52.742: INFO: Pod "pod-3629cdcb-6238-42d0-a05c-4a5f8c415365" satisfied condition "Succeeded or Failed" Apr 17 23:54:52.745: INFO: Trying to get logs from node latest-worker pod pod-3629cdcb-6238-42d0-a05c-4a5f8c415365 container test-container: STEP: delete the pod Apr 17 23:54:52.872: INFO: Waiting for pod pod-3629cdcb-6238-42d0-a05c-4a5f8c415365 to disappear Apr 17 23:54:52.900: INFO: Pod pod-3629cdcb-6238-42d0-a05c-4a5f8c415365 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:52.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8878" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:52.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:54:53.031: INFO: (0) 
/api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.423188ms) Apr 17 23:54:53.035: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.437214ms) Apr 17 23:54:53.038: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.781586ms) Apr 17 23:54:53.041: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.049594ms) Apr 17 23:54:53.044: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.007063ms) Apr 17 23:54:53.047: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.14411ms) Apr 17 23:54:53.050: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.710079ms) Apr 17 23:54:53.052: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.567236ms) Apr 17 23:54:53.055: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.743203ms) Apr 17 23:54:53.058: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.633661ms) Apr 17 23:54:53.060: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.406742ms) Apr 17 23:54:53.063: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.674305ms) Apr 17 23:54:53.065: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.471416ms) Apr 17 23:54:53.068: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.682373ms) Apr 17 23:54:53.071: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.664826ms) Apr 17 23:54:53.074: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.87131ms) Apr 17 23:54:53.077: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.127739ms) Apr 17 23:54:53.080: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.849021ms) Apr 17 23:54:53.094: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 14.647341ms) Apr 17 23:54:53.098: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.183311ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:53.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1074" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":70,"skipped":999,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:53.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:54:53.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2573" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":71,"skipped":999,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:54:53.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:54:53.280: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 17 23:54:55.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2954 create -f -' Apr 17 23:54:58.048: INFO: stderr: "" Apr 17 23:54:58.048: INFO: stdout: "e2e-test-crd-publish-openapi-4248-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 23:54:58.048: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2954 delete e2e-test-crd-publish-openapi-4248-crds test-cr' Apr 17 23:54:58.153: INFO: stderr: "" Apr 17 23:54:58.153: INFO: stdout: "e2e-test-crd-publish-openapi-4248-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 17 23:54:58.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2954 apply -f -' Apr 17 23:54:58.411: INFO: stderr: "" Apr 17 23:54:58.411: INFO: stdout: "e2e-test-crd-publish-openapi-4248-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 23:54:58.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2954 delete e2e-test-crd-publish-openapi-4248-crds test-cr' Apr 17 23:54:58.508: INFO: stderr: "" Apr 17 23:54:58.508: INFO: stdout: "e2e-test-crd-publish-openapi-4248-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 17 23:54:58.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4248-crds' Apr 17 23:54:58.748: INFO: stderr: "" Apr 17 23:54:58.748: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4248-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:00.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2954" for this suite. • [SLOW TEST:7.463 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":72,"skipped":1007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:00.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:55:00.790: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:01.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7289" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":73,"skipped":1032,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:01.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 17 23:55:01.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-520' Apr 17 23:55:01.711: INFO: stderr: "" Apr 17 23:55:01.711: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 17 23:55:01.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:01.836: INFO: stderr: "" Apr 17 23:55:01.836: INFO: stdout: "update-demo-nautilus-hw8z2 update-demo-nautilus-w68j2 " Apr 17 23:55:01.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hw8z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:01.940: INFO: stderr: "" Apr 17 23:55:01.940: INFO: stdout: "" Apr 17 23:55:01.940: INFO: update-demo-nautilus-hw8z2 is created but not running Apr 17 23:55:06.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:07.038: INFO: stderr: "" Apr 17 23:55:07.038: INFO: stdout: "update-demo-nautilus-hw8z2 update-demo-nautilus-w68j2 " Apr 17 23:55:07.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hw8z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:07.129: INFO: stderr: "" Apr 17 23:55:07.130: INFO: stdout: "true" Apr 17 23:55:07.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hw8z2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:07.220: INFO: stderr: "" Apr 17 23:55:07.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 23:55:07.220: INFO: validating pod update-demo-nautilus-hw8z2 Apr 17 23:55:07.224: INFO: got data: { "image": "nautilus.jpg" } Apr 17 23:55:07.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 23:55:07.224: INFO: update-demo-nautilus-hw8z2 is verified up and running Apr 17 23:55:07.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:07.327: INFO: stderr: "" Apr 17 23:55:07.328: INFO: stdout: "true" Apr 17 23:55:07.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:07.421: INFO: stderr: "" Apr 17 23:55:07.421: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 23:55:07.421: INFO: validating pod update-demo-nautilus-w68j2 Apr 17 23:55:07.425: INFO: got data: { "image": "nautilus.jpg" } Apr 17 23:55:07.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 17 23:55:07.425: INFO: update-demo-nautilus-w68j2 is verified up and running STEP: scaling down the replication controller Apr 17 23:55:07.427: INFO: scanned /root for discovery docs: Apr 17 23:55:07.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-520' Apr 17 23:55:08.638: INFO: stderr: "" Apr 17 23:55:08.638: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 17 23:55:08.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:08.743: INFO: stderr: "" Apr 17 23:55:08.743: INFO: stdout: "update-demo-nautilus-hw8z2 update-demo-nautilus-w68j2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 17 23:55:13.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:13.832: INFO: stderr: "" Apr 17 23:55:13.832: INFO: stdout: "update-demo-nautilus-w68j2 " Apr 17 23:55:13.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:13.924: INFO: stderr: "" Apr 17 23:55:13.924: INFO: stdout: "true" Apr 17 23:55:13.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:14.031: INFO: stderr: "" Apr 17 23:55:14.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 23:55:14.031: INFO: validating pod update-demo-nautilus-w68j2 Apr 17 23:55:14.034: INFO: got data: { "image": "nautilus.jpg" } Apr 17 23:55:14.034: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 23:55:14.034: INFO: update-demo-nautilus-w68j2 is verified up and running STEP: scaling up the replication controller Apr 17 23:55:14.036: INFO: scanned /root for discovery docs: Apr 17 23:55:14.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-520' Apr 17 23:55:15.195: INFO: stderr: "" Apr 17 23:55:15.195: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 17 23:55:15.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:15.300: INFO: stderr: "" Apr 17 23:55:15.300: INFO: stdout: "update-demo-nautilus-b769x update-demo-nautilus-w68j2 " Apr 17 23:55:15.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b769x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:15.388: INFO: stderr: "" Apr 17 23:55:15.388: INFO: stdout: "" Apr 17 23:55:15.388: INFO: update-demo-nautilus-b769x is created but not running Apr 17 23:55:20.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-520' Apr 17 23:55:20.500: INFO: stderr: "" Apr 17 23:55:20.500: INFO: stdout: "update-demo-nautilus-b769x update-demo-nautilus-w68j2 " Apr 17 23:55:20.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b769x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:20.595: INFO: stderr: "" Apr 17 23:55:20.595: INFO: stdout: "true" Apr 17 23:55:20.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b769x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:20.682: INFO: stderr: "" Apr 17 23:55:20.682: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 23:55:20.682: INFO: validating pod update-demo-nautilus-b769x Apr 17 23:55:20.686: INFO: got data: { "image": "nautilus.jpg" } Apr 17 23:55:20.686: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 23:55:20.686: INFO: update-demo-nautilus-b769x is verified up and running Apr 17 23:55:20.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:20.813: INFO: stderr: "" Apr 17 23:55:20.813: INFO: stdout: "true" Apr 17 23:55:20.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w68j2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-520' Apr 17 23:55:20.898: INFO: stderr: "" Apr 17 23:55:20.898: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 17 23:55:20.898: INFO: validating pod update-demo-nautilus-w68j2 Apr 17 23:55:20.901: INFO: got data: { "image": "nautilus.jpg" } Apr 17 23:55:20.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 17 23:55:20.901: INFO: update-demo-nautilus-w68j2 is verified up and running STEP: using delete to clean up resources Apr 17 23:55:20.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-520' Apr 17 23:55:21.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 23:55:21.008: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 17 23:55:21.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-520' Apr 17 23:55:21.098: INFO: stderr: "No resources found in kubectl-520 namespace.\n" Apr 17 23:55:21.098: INFO: stdout: "" Apr 17 23:55:21.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-520 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 23:55:21.201: INFO: stderr: "" Apr 17 23:55:21.201: INFO: stdout: "update-demo-nautilus-b769x\nupdate-demo-nautilus-w68j2\n" Apr 17 23:55:21.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-520' Apr 17 23:55:21.799: INFO: stderr: "No resources found in kubectl-520 namespace.\n" Apr 17 23:55:21.799: INFO: stdout: "" Apr 17 23:55:21.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-520 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 
23:55:21.930: INFO: stderr: "" Apr 17 23:55:21.930: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:21.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-520" for this suite. • [SLOW TEST:20.570 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":74,"skipped":1044,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:21.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7e0e2c46-bb2b-4a50-aa80-15392ee1b4a8 STEP: Creating a pod to test consume secrets Apr 17 23:55:22.106: INFO: Waiting up to 5m0s 
for pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e" in namespace "secrets-8887" to be "Succeeded or Failed" Apr 17 23:55:22.129: INFO: Pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.034797ms Apr 17 23:55:24.133: INFO: Pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027129875s Apr 17 23:55:26.598: INFO: Pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e": Phase="Running", Reason="", readiness=true. Elapsed: 4.492424294s Apr 17 23:55:28.602: INFO: Pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.496442436s STEP: Saw pod success Apr 17 23:55:28.602: INFO: Pod "pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e" satisfied condition "Succeeded or Failed" Apr 17 23:55:28.605: INFO: Trying to get logs from node latest-worker pod pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e container secret-volume-test: STEP: delete the pod Apr 17 23:55:28.660: INFO: Waiting for pod pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e to disappear Apr 17 23:55:28.665: INFO: Pod pod-secrets-8d841b22-491a-4824-9583-12dce9b6139e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:28.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8887" for this suite. 
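The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Phase="Pending" … Elapsed: …` lines above come from a poll-until-phase loop with roughly a two-second cadence. A minimal sketch of that pattern (the `get_phase` callback and parameter names are hypothetical, not the e2e framework's real API):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0, clock=time.monotonic,
                       sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases.

    Returns the final phase, or raises TimeoutError after `timeout`
    seconds. Mirrors the ~2s polling cadence visible in the log
    timestamps above; clock/sleep are injectable for testing.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in target_phases:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

The injectable `clock`/`sleep` pair keeps the loop deterministic under test, which is the same reason e2e frameworks centralize their wait helpers.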
• [SLOW TEST:6.735 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:28.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 17 23:55:28.739: INFO: Waiting up to 5m0s for pod "pod-1bff1480-70ae-4d1e-b561-d33136a61d3b" in namespace "emptydir-354" to be "Succeeded or Failed" Apr 17 23:55:28.758: INFO: Pod "pod-1bff1480-70ae-4d1e-b561-d33136a61d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.708118ms Apr 17 23:55:30.763: INFO: Pod "pod-1bff1480-70ae-4d1e-b561-d33136a61d3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023954365s Apr 17 23:55:32.819: INFO: Pod "pod-1bff1480-70ae-4d1e-b561-d33136a61d3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080272301s STEP: Saw pod success Apr 17 23:55:32.819: INFO: Pod "pod-1bff1480-70ae-4d1e-b561-d33136a61d3b" satisfied condition "Succeeded or Failed" Apr 17 23:55:32.840: INFO: Trying to get logs from node latest-worker pod pod-1bff1480-70ae-4d1e-b561-d33136a61d3b container test-container: STEP: delete the pod Apr 17 23:55:32.893: INFO: Waiting for pod pod-1bff1480-70ae-4d1e-b561-d33136a61d3b to disappear Apr 17 23:55:32.914: INFO: Pod pod-1bff1480-70ae-4d1e-b561-d33136a61d3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:32.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-354" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1083,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:32.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-d93dc4e7-b283-471f-867c-a0a153e832fd STEP: 
Creating a pod to test consume secrets Apr 17 23:55:33.070: INFO: Waiting up to 5m0s for pod "pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939" in namespace "secrets-9881" to be "Succeeded or Failed" Apr 17 23:55:33.075: INFO: Pod "pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939": Phase="Pending", Reason="", readiness=false. Elapsed: 4.759532ms Apr 17 23:55:35.167: INFO: Pod "pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096651342s Apr 17 23:55:37.171: INFO: Pod "pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100697326s STEP: Saw pod success Apr 17 23:55:37.171: INFO: Pod "pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939" satisfied condition "Succeeded or Failed" Apr 17 23:55:37.173: INFO: Trying to get logs from node latest-worker pod pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939 container secret-volume-test: STEP: delete the pod Apr 17 23:55:37.229: INFO: Waiting for pod pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939 to disappear Apr 17 23:55:37.236: INFO: Pod pod-secrets-9864037d-fd0c-44b5-af1b-a252c4805939 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:55:37.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9881" for this suite. 
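The two Secrets volume tests above verify that base64-encoded Secret values arrive inside the container as decoded files, optionally under remapped paths (the "with mappings" variant). A rough illustration of that key-to-file projection, using hypothetical data and a simplified `items` shape:

```python
import base64

def project_secret(data, items=None):
    """Map a Secret's base64-encoded `data` to {file_path: decoded_bytes}.

    `items`, when given, is a {key: path} remapping standing in for the
    volume's items[].key / items[].path fields; otherwise each key
    becomes a file named after itself, as with an unmapped secret volume.
    """
    mapping = items or {key: key for key in data}
    return {path: base64.b64decode(data[key]) for key, path in mapping.items()}
```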
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:55:37.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 17 23:55:37.383: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926361 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:55:37.383: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926361 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 17 23:55:47.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926404 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:55:47.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926404 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 17 23:55:57.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926436 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:55:57.400: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926436 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 17 23:56:07.406: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926466 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:56:07.406: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-a f62fe991-732b-447a-86cf-b84581911b3d 8926466 0 2020-04-17 23:55:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 17 23:56:17.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-b 587e0f9e-26c6-40c3-810e-062f33d8b662 8926496 0 2020-04-17 23:56:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:56:17.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-b 587e0f9e-26c6-40c3-810e-062f33d8b662 8926496 0 2020-04-17 23:56:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 17 23:56:27.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-b 587e0f9e-26c6-40c3-810e-062f33d8b662 8926526 0 2020-04-17 23:56:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 23:56:27.424: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9040 /api/v1/namespaces/watch-9040/configmaps/e2e-watch-test-configmap-b 587e0f9e-26c6-40c3-810e-062f33d8b662 8926526 0 2020-04-17 23:56:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:56:37.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9040" for this suite. • [SLOW TEST:60.167 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":78,"skipped":1125,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:56:37.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 17 23:56:37.526: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"84a072d1-3827-468e-9d41-f48ed6538160", Controller:(*bool)(0xc00341bd16), BlockOwnerDeletion:(*bool)(0xc00341bd17)}} Apr 17 23:56:37.581: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"73874c98-4a7a-43bb-85ff-6cf7ad8ec419", Controller:(*bool)(0xc003c317f6), BlockOwnerDeletion:(*bool)(0xc003c317f7)}} Apr 17 23:56:37.640: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b768910a-c7e4-4227-9f88-dcc02a337a70", Controller:(*bool)(0xc003c319b6), BlockOwnerDeletion:(*bool)(0xc003c319b7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:56:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-69" for this suite. 
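The garbage-collector test above wires pod1 → pod3 → pod2 → pod1 through `OwnerReferences` and asserts collection proceeds despite the circle. A toy sketch of spotting such a circle (structure simplified to one owner per object, which is enough to model the three pods in the log):

```python
def find_owner_cycle(owner_refs):
    """Given {object_name: owner_name or None}, return a list of names
    forming a reference circle, or None if the graph is acyclic.

    Real Kubernetes objects may carry several ownerReferences; a single
    owner per object suffices to model the pod1/pod2/pod3 circle above.
    """
    for start in owner_refs:
        seen, node = [], start
        while node is not None and node not in seen:
            seen.append(node)
            node = owner_refs.get(node)
        if node is not None:  # walked back onto a visited node: circle
            return seen[seen.index(node):]
    return None
```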
• [SLOW TEST:5.234 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":79,"skipped":1127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:56:42.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e4bab04d-41bd-43e4-99c9-39e74eb6b827 STEP: Creating a pod to test consume secrets Apr 17 23:56:42.734: INFO: Waiting up to 5m0s for pod "pod-secrets-81782eea-93ce-4738-b624-81a01a286753" in namespace "secrets-1199" to be "Succeeded or Failed" Apr 17 23:56:42.747: INFO: Pod "pod-secrets-81782eea-93ce-4738-b624-81a01a286753": Phase="Pending", Reason="", readiness=false. Elapsed: 12.442109ms Apr 17 23:56:44.751: INFO: Pod "pod-secrets-81782eea-93ce-4738-b624-81a01a286753": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016790765s Apr 17 23:56:46.756: INFO: Pod "pod-secrets-81782eea-93ce-4738-b624-81a01a286753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02157898s STEP: Saw pod success Apr 17 23:56:46.756: INFO: Pod "pod-secrets-81782eea-93ce-4738-b624-81a01a286753" satisfied condition "Succeeded or Failed" Apr 17 23:56:46.759: INFO: Trying to get logs from node latest-worker pod pod-secrets-81782eea-93ce-4738-b624-81a01a286753 container secret-env-test: STEP: delete the pod Apr 17 23:56:46.810: INFO: Waiting for pod pod-secrets-81782eea-93ce-4738-b624-81a01a286753 to disappear Apr 17 23:56:46.816: INFO: Pod pod-secrets-81782eea-93ce-4738-b624-81a01a286753 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 17 23:56:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1199" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1206,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 17 23:56:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 17 23:56:46.892: INFO: namespace kubectl-1847
Apr 17 23:56:46.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1847'
Apr 17 23:56:47.165: INFO: stderr: ""
Apr 17 23:56:47.165: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 17 23:56:48.203: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:48.203: INFO: Found 0 / 1
Apr 17 23:56:49.263: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:49.263: INFO: Found 0 / 1
Apr 17 23:56:50.300: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:50.300: INFO: Found 0 / 1
Apr 17 23:56:51.170: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:51.170: INFO: Found 0 / 1
Apr 17 23:56:52.170: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:52.170: INFO: Found 1 / 1
Apr 17 23:56:52.170: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 17 23:56:52.174: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 23:56:52.174: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 17 23:56:52.174: INFO: wait on agnhost-master startup in kubectl-1847
Apr 17 23:56:52.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-4w69s agnhost-master --namespace=kubectl-1847'
Apr 17 23:56:52.304: INFO: stderr: ""
Apr 17 23:56:52.304: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 17 23:56:52.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1847'
Apr 17 23:56:52.460: INFO: stderr: ""
Apr 17 23:56:52.460: INFO: stdout: "service/rm2 exposed\n"
Apr 17 23:56:52.466: INFO: Service rm2 in namespace kubectl-1847 found.
STEP: exposing service
Apr 17 23:56:54.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1847'
Apr 17 23:56:54.596: INFO: stderr: ""
Apr 17 23:56:54.596: INFO: stdout: "service/rm3 exposed\n"
Apr 17 23:56:54.628: INFO: Service rm3 in namespace kubectl-1847 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:56:56.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1847" for this suite.
• [SLOW TEST:9.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":81,"skipped":1227,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:56:56.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-04bf026d-4b99-432a-bfcd-7bf77fa88ec0
STEP: Creating a pod to test consume secrets
Apr 17 23:56:56.725: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae" in namespace "projected-3600" to be "Succeeded or Failed"
Apr 17 23:56:56.729: INFO: Pod "pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.766716ms
Apr 17 23:56:58.766: INFO: Pod "pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041342334s
Apr 17 23:57:01.036: INFO: Pod "pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310766271s
STEP: Saw pod success
Apr 17 23:57:01.036: INFO: Pod "pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae" satisfied condition "Succeeded or Failed"
Apr 17 23:57:01.039: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae container secret-volume-test:
STEP: delete the pod
Apr 17 23:57:01.359: INFO: Waiting for pod pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae to disappear
Apr 17 23:57:01.370: INFO: Pod pod-projected-secrets-cea59b84-7f28-474a-87f6-2d9bb1103aae no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:57:01.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3600" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:57:01.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating an pod
Apr 17 23:57:01.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-3317 -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 17 23:57:01.519: INFO: stderr: ""
Apr 17 23:57:01.519: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Apr 17 23:57:01.519: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 17 23:57:01.519: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3317" to be "running and ready, or succeeded"
Apr 17 23:57:01.532: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.885073ms
Apr 17 23:57:03.536: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017051148s
Apr 17 23:57:05.543: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.023360083s
Apr 17 23:57:05.543: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 17 23:57:05.543: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Apr 17 23:57:05.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317'
Apr 17 23:57:05.646: INFO: stderr: ""
Apr 17 23:57:05.646: INFO: stdout: "I0417 23:57:03.921489 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/fqtc 475\nI0417 23:57:04.121641 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/gnj 469\nI0417 23:57:04.321717 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/pnrk 205\nI0417 23:57:04.521686 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/fv4 490\nI0417 23:57:04.721706 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/c7sw 407\nI0417 23:57:04.921732 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/h6tz 294\nI0417 23:57:05.121741 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/bkqd 442\nI0417 23:57:05.321684 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/87t6 264\nI0417 23:57:05.523248 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4tj 202\n"
STEP: limiting log lines
Apr 17 23:57:05.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317 --tail=1'
Apr 17 23:57:05.755: INFO: stderr: ""
Apr 17 23:57:05.755: INFO: stdout: "I0417 23:57:05.721714 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/lrzf 436\n"
Apr 17 23:57:05.755: INFO: got output "I0417 23:57:05.721714 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/lrzf 436\n"
STEP: limiting log bytes
Apr 17 23:57:05.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317 --limit-bytes=1'
Apr 17 23:57:05.852: INFO: stderr: ""
Apr 17 23:57:05.852: INFO: stdout: "I"
Apr 17 23:57:05.852: INFO: got output "I"
STEP: exposing timestamps
Apr 17 23:57:05.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317 --tail=1 --timestamps'
Apr 17 23:57:05.960: INFO: stderr: ""
Apr 17 23:57:05.960: INFO: stdout: "2020-04-17T23:57:05.921906061Z I0417 23:57:05.921683 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z9j5 357\n"
Apr 17 23:57:05.960: INFO: got output "2020-04-17T23:57:05.921906061Z I0417 23:57:05.921683 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z9j5 357\n"
STEP: restricting to a time range
Apr 17 23:57:08.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317 --since=1s'
Apr 17 23:57:08.577: INFO: stderr: ""
Apr 17 23:57:08.577: INFO: stdout: "I0417 23:57:07.721699 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/vp8j 534\nI0417 23:57:07.921659 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/whnq 240\nI0417 23:57:08.121715 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/ftx 279\nI0417 23:57:08.321721 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/qcd 457\nI0417 23:57:08.521743 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/d52 270\n"
Apr 17 23:57:08.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3317 --since=24h'
Apr 17 23:57:08.678: INFO: stderr: ""
Apr 17 23:57:08.678: INFO: stdout: "I0417 23:57:03.921489 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/fqtc 475\nI0417 23:57:04.121641 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/gnj 469\nI0417 23:57:04.321717 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/pnrk 205\nI0417 23:57:04.521686 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/fv4 490\nI0417 23:57:04.721706 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/c7sw 407\nI0417 23:57:04.921732 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/h6tz 294\nI0417 23:57:05.121741 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/bkqd 442\nI0417 23:57:05.321684 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/87t6 264\nI0417 23:57:05.523248 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4tj 202\nI0417 23:57:05.721714 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/lrzf 436\nI0417 23:57:05.921683 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z9j5 357\nI0417 23:57:06.121704 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/sz8v 277\nI0417 23:57:06.321664 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/xv8 377\nI0417 23:57:06.521677 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/ngcc 524\nI0417 23:57:06.721703 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/nrz 254\nI0417 23:57:06.921704 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/g4w 533\nI0417 23:57:07.121672 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/mfs 318\nI0417 23:57:07.321692 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/n54 227\nI0417 23:57:07.521685 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/72tc 323\nI0417 23:57:07.721699 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/vp8j 534\nI0417 23:57:07.921659 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/whnq 240\nI0417 23:57:08.121715 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/ftx 279\nI0417 23:57:08.321721 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/qcd 457\nI0417 23:57:08.521743 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/d52 270\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Apr 17 23:57:08.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3317'
Apr 17 23:57:11.079: INFO: stderr: ""
Apr 17 23:57:11.079: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:57:11.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3317" for this suite.
• [SLOW TEST:9.708 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":83,"skipped":1339,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:57:11.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 17 23:57:19.195: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:19.216: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:21.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:21.227: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:23.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:23.220: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:25.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:25.220: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:27.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:27.220: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:29.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:29.220: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:31.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:31.220: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 17 23:57:33.216: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 17 23:57:33.219: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:57:33.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9890" for this suite.
• [SLOW TEST:22.147 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1340,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:57:33.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 23:57:34.005: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 23:57:36.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764654, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764654, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764654, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722764653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 23:57:39.339: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:57:49.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1565" for this suite.
STEP: Destroying namespace "webhook-1565-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:16.342 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":85,"skipped":1389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:57:49.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 17 23:57:49.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9774" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":86,"skipped":1415,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 17 23:57:49.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-8ac22327-8ea6-4208-88cb-58049de5c660 in namespace container-probe-5749
Apr 17 23:57:53.950: INFO: Started pod liveness-8ac22327-8ea6-4208-88cb-58049de5c660 in namespace container-probe-5749
STEP: checking the pod's current state and verifying that restartCount is present
Apr 17 23:57:53.953: INFO: Initial restart count of pod liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is 0
Apr 17 23:58:05.978: INFO: Restart count of pod container-probe-5749/liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is now 1 (12.025452316s elapsed)
Apr 17 23:58:26.036: INFO: Restart count of pod container-probe-5749/liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is now 2 (32.082886851s elapsed)
Apr 17 23:58:46.078: INFO: Restart count of pod container-probe-5749/liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is now 3 (52.125005779s elapsed)
Apr 17 23:59:06.174: INFO: Restart count of pod container-probe-5749/liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is now 4 (1m12.221150439s elapsed)
Apr 18 00:00:06.342: INFO: Restart count of pod container-probe-5749/liveness-8ac22327-8ea6-4208-88cb-58049de5c660 is now 5 (2m12.389158599s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:00:06.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5749" for this suite.
• [SLOW TEST:136.617 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1432,"failed":0}
S
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:00:06.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-6e30295c-1d35-49ed-b8db-162564113a23
STEP: Creating configMap with name cm-test-opt-upd-a22ac832-ef30-4522-b34c-26b9a89a45e6
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6e30295c-1d35-49ed-b8db-162564113a23
STEP: Updating configmap cm-test-opt-upd-a22ac832-ef30-4522-b34c-26b9a89a45e6
STEP: Creating configMap with name cm-test-opt-create-0128232e-b0b7-466c-8b29-a13905fe5e72
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:01:15.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6438" for this suite.
• [SLOW TEST:68.771 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1433,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:01:15.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-604
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-604
STEP: Creating statefulset with conflicting port in namespace statefulset-604
STEP: Waiting until pod test-pod will start running in namespace statefulset-604
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-604
Apr 18 00:01:19.338: INFO: Observed stateful pod in namespace: statefulset-604, name: ss-0, uid: 8d2cc55c-4048-4671-af75-be0f4190584c, status phase: Pending. Waiting for statefulset controller to delete.
Apr 18 00:01:22.726: INFO: Observed stateful pod in namespace: statefulset-604, name: ss-0, uid: 8d2cc55c-4048-4671-af75-be0f4190584c, status phase: Failed. Waiting for statefulset controller to delete.
Apr 18 00:01:22.747: INFO: Observed stateful pod in namespace: statefulset-604, name: ss-0, uid: 8d2cc55c-4048-4671-af75-be0f4190584c, status phase: Failed. Waiting for statefulset controller to delete.
Apr 18 00:01:22.770: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-604
STEP: Removing pod with conflicting port in namespace statefulset-604
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-604 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 18 00:01:26.890: INFO: Deleting all statefulset in ns statefulset-604
Apr 18 00:01:26.893: INFO: Scaling statefulset ss to 0
Apr 18 00:01:36.928: INFO: Waiting for statefulset status.replicas updated to 0
Apr 18 00:01:36.931: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:01:36.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-604" for this suite.
• [SLOW TEST:21.760 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":89,"skipped":1446,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:01:36.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:01:48.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7632" for this suite.
• [SLOW TEST:11.099 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":90,"skipped":1447,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:01:48.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8439
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Apr 18 00:01:48.165: INFO: Found 0 stateful pods, waiting for 3
Apr 18 00:01:58.358: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 18 00:01:58.358: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 18 00:01:58.358: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 18 00:02:08.170: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 18 00:02:08.170: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 18 00:02:08.170: INFO:
Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 18 00:02:08.197: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 18 00:02:18.247: INFO: Updating stateful set ss2 Apr 18 00:02:18.268: INFO: Waiting for Pod statefulset-8439/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 18 00:02:28.406: INFO: Found 2 stateful pods, waiting for 3 Apr 18 00:02:38.411: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:02:38.411: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:02:38.411: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 18 00:02:38.435: INFO: Updating stateful set ss2 Apr 18 00:02:38.449: INFO: Waiting for Pod statefulset-8439/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 18 00:02:48.474: INFO: Updating stateful set ss2 Apr 18 00:02:48.490: INFO: Waiting for StatefulSet statefulset-8439/ss2 to complete update Apr 18 00:02:48.490: INFO: Waiting for Pod statefulset-8439/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 18 00:02:58.496: INFO: Deleting all statefulset in ns statefulset-8439 Apr 18 00:02:58.499: INFO: Scaling statefulset ss2 to 0 Apr 18 00:03:18.512: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:03:18.515: INFO: Deleting statefulset ss2 
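The canary and phased rollout exercised above are driven by the StatefulSet `RollingUpdate` strategy's `partition` field: only pods with an ordinal greater than or equal to the partition are moved to the new revision. A sketch of the shape involved (replica count and image versions taken from the log; names of labels and the service are assumptions):

```yaml
# With partition: 2 and 3 replicas, only ss2-2 is updated to the new
# template revision (the canary). Lowering the partition to 1 and then 0
# phases the update through ss2-1 and ss2-0, matching the revision waits
# seen in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2               # canary: update ordinals >= 2 only
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # updated template image
```

A phased rollout then amounts to repeatedly patching the partition down, e.g. `kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'` to let every ordinal adopt the new revision.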
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:03:18.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8439" for this suite. • [SLOW TEST:90.482 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":91,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:03:18.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-j5ht STEP: Creating a pod to test atomic-volume-subpath Apr 18 00:03:18.636: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-j5ht" in namespace "subpath-5555" to be "Succeeded or Failed" Apr 18 00:03:18.646: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031046ms Apr 18 00:03:20.651: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0149498s Apr 18 00:03:22.656: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 4.019535847s Apr 18 00:03:24.660: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 6.023647146s Apr 18 00:03:26.664: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 8.027720239s Apr 18 00:03:28.668: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 10.031615774s Apr 18 00:03:30.672: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 12.035540571s Apr 18 00:03:32.676: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 14.039532807s Apr 18 00:03:34.680: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 16.043820924s Apr 18 00:03:36.684: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 18.047659332s Apr 18 00:03:38.687: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 20.051062233s Apr 18 00:03:40.690: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.053856666s Apr 18 00:03:42.693: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Running", Reason="", readiness=true. Elapsed: 24.057044329s Apr 18 00:03:44.697: INFO: Pod "pod-subpath-test-projected-j5ht": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061316503s STEP: Saw pod success Apr 18 00:03:44.697: INFO: Pod "pod-subpath-test-projected-j5ht" satisfied condition "Succeeded or Failed" Apr 18 00:03:44.700: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-j5ht container test-container-subpath-projected-j5ht: STEP: delete the pod Apr 18 00:03:44.755: INFO: Waiting for pod pod-subpath-test-projected-j5ht to disappear Apr 18 00:03:44.772: INFO: Pod pod-subpath-test-projected-j5ht no longer exists STEP: Deleting pod pod-subpath-test-projected-j5ht Apr 18 00:03:44.772: INFO: Deleting pod "pod-subpath-test-projected-j5ht" in namespace "subpath-5555" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:03:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5555" for this suite. 
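The pod under test above mounts a projected volume through a `subPath` and checks that the file content stays consistent while the volume's data is atomically updated. A rough sketch of that shape, with the data source, key names, and command as labeled assumptions (the log only shows the pod name and that it ran to Succeeded):

```yaml
# Hypothetical sketch: a projected volume (here backed by a ConfigMap)
# mounted via subPath. The container reads the subPath file while the
# atomic-writer volume contents are updated underneath it.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap       # assumption: pre-created data source
  containers:
  - name: test-container-subpath
    image: busybox
    # Assumed command: read the subPath-mounted file periodically, then exit 0
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /test-volume-subpath; sleep 2; done"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume-subpath
      subPath: configmap-key       # mount a single entry of the volume
```

The test then waits for the pod to reach "Succeeded or Failed", as the Elapsed polling lines above show.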
• [SLOW TEST:26.260 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":92,"skipped":1523,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:03:44.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 18 00:03:44.850: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 18 00:03:49.856: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:03:49.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-926" for this suite.
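The "release" mechanism this test verifies follows from ReplicationControllers managing pods purely by label selector. A sketch of the setup (the RC name `pod-release` is from the log; the label key, image, and new label value are assumptions):

```yaml
# A ReplicationController adopts pods matching its selector. Changing the
# label on a managed pod detaches ("releases") it from the controller,
# which then creates a replacement pod to restore the replica count.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release            # assumed label key/value
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Relabeling the managed pod, e.g. `kubectl label pod <pod-name> name=released --overwrite`, makes it stop matching the selector; the orphaned pod keeps running while the RC spins up a new matching one.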
• [SLOW TEST:5.163 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":93,"skipped":1526,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:03:49.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 18 00:03:54.610: INFO: Successfully updated pod "annotationupdate351b8c8c-35b5-4db4-858c-7a32339006ab"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:03:56.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3514" for this suite.
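The annotation-update behavior tested above relies on the projected downward API volume: when a pod's annotations change through the API, the kubelet rewrites the mounted file in place, with no container restart. A minimal sketch (the mount path, annotation key, and command are illustrative assumptions; the log confirms only that the pod's annotations were successfully updated):

```yaml
# Hypothetical sketch: pod annotations exposed via a projected downwardAPI
# volume. Updating metadata.annotations on the live pod causes the kubelet
# to refresh /etc/podinfo/annotations inside the running container.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate
  annotations:
    build: one                   # illustrative annotation to modify later
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

Note that only downward API fields exposed as volume files refresh on modification; fields injected as environment variables are fixed at container start.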
• [SLOW TEST:6.697 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1534,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:03:56.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 18 00:04:06.766: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:06.766: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:06.804143 7 log.go:172] (0xc002f0c370) (0xc000c9e320) Create stream I0418 00:04:06.804172 7 log.go:172] 
(0xc002f0c370) (0xc000c9e320) Stream added, broadcasting: 1 I0418 00:04:06.806696 7 log.go:172] (0xc002f0c370) Reply frame received for 1 I0418 00:04:06.806748 7 log.go:172] (0xc002f0c370) (0xc000c801e0) Create stream I0418 00:04:06.806769 7 log.go:172] (0xc002f0c370) (0xc000c801e0) Stream added, broadcasting: 3 I0418 00:04:06.807762 7 log.go:172] (0xc002f0c370) Reply frame received for 3 I0418 00:04:06.807807 7 log.go:172] (0xc002f0c370) (0xc0017d5220) Create stream I0418 00:04:06.807823 7 log.go:172] (0xc002f0c370) (0xc0017d5220) Stream added, broadcasting: 5 I0418 00:04:06.808993 7 log.go:172] (0xc002f0c370) Reply frame received for 5 I0418 00:04:06.902783 7 log.go:172] (0xc002f0c370) Data frame received for 5 I0418 00:04:06.902824 7 log.go:172] (0xc0017d5220) (5) Data frame handling I0418 00:04:06.902854 7 log.go:172] (0xc002f0c370) Data frame received for 3 I0418 00:04:06.902866 7 log.go:172] (0xc000c801e0) (3) Data frame handling I0418 00:04:06.902894 7 log.go:172] (0xc000c801e0) (3) Data frame sent I0418 00:04:06.902908 7 log.go:172] (0xc002f0c370) Data frame received for 3 I0418 00:04:06.902918 7 log.go:172] (0xc000c801e0) (3) Data frame handling I0418 00:04:06.904926 7 log.go:172] (0xc002f0c370) Data frame received for 1 I0418 00:04:06.904972 7 log.go:172] (0xc000c9e320) (1) Data frame handling I0418 00:04:06.905063 7 log.go:172] (0xc000c9e320) (1) Data frame sent I0418 00:04:06.905097 7 log.go:172] (0xc002f0c370) (0xc000c9e320) Stream removed, broadcasting: 1 I0418 00:04:06.905291 7 log.go:172] (0xc002f0c370) Go away received I0418 00:04:06.905407 7 log.go:172] (0xc002f0c370) (0xc000c9e320) Stream removed, broadcasting: 1 I0418 00:04:06.905441 7 log.go:172] (0xc002f0c370) (0xc000c801e0) Stream removed, broadcasting: 3 I0418 00:04:06.905455 7 log.go:172] (0xc002f0c370) (0xc0017d5220) Stream removed, broadcasting: 5 Apr 18 00:04:06.905: INFO: Exec stderr: "" Apr 18 00:04:06.905: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:06.905: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:06.932200 7 log.go:172] (0xc002ccc6e0) (0xc0017d5a40) Create stream I0418 00:04:06.932229 7 log.go:172] (0xc002ccc6e0) (0xc0017d5a40) Stream added, broadcasting: 1 I0418 00:04:06.934417 7 log.go:172] (0xc002ccc6e0) Reply frame received for 1 I0418 00:04:06.934457 7 log.go:172] (0xc002ccc6e0) (0xc001345360) Create stream I0418 00:04:06.934472 7 log.go:172] (0xc002ccc6e0) (0xc001345360) Stream added, broadcasting: 3 I0418 00:04:06.935372 7 log.go:172] (0xc002ccc6e0) Reply frame received for 3 I0418 00:04:06.935393 7 log.go:172] (0xc002ccc6e0) (0xc0017d5b80) Create stream I0418 00:04:06.935399 7 log.go:172] (0xc002ccc6e0) (0xc0017d5b80) Stream added, broadcasting: 5 I0418 00:04:06.936257 7 log.go:172] (0xc002ccc6e0) Reply frame received for 5 I0418 00:04:06.990968 7 log.go:172] (0xc002ccc6e0) Data frame received for 3 I0418 00:04:06.991009 7 log.go:172] (0xc001345360) (3) Data frame handling I0418 00:04:06.991019 7 log.go:172] (0xc001345360) (3) Data frame sent I0418 00:04:06.991026 7 log.go:172] (0xc002ccc6e0) Data frame received for 3 I0418 00:04:06.991036 7 log.go:172] (0xc001345360) (3) Data frame handling I0418 00:04:06.991053 7 log.go:172] (0xc002ccc6e0) Data frame received for 5 I0418 00:04:06.991066 7 log.go:172] (0xc0017d5b80) (5) Data frame handling I0418 00:04:06.992509 7 log.go:172] (0xc002ccc6e0) Data frame received for 1 I0418 00:04:06.992583 7 log.go:172] (0xc0017d5a40) (1) Data frame handling I0418 00:04:06.992631 7 log.go:172] (0xc0017d5a40) (1) Data frame sent I0418 00:04:06.992656 7 log.go:172] (0xc002ccc6e0) (0xc0017d5a40) Stream removed, broadcasting: 1 I0418 00:04:06.992679 7 log.go:172] (0xc002ccc6e0) Go away received I0418 00:04:06.992818 7 log.go:172] (0xc002ccc6e0) (0xc0017d5a40) Stream removed, broadcasting: 1 I0418 
00:04:06.992834 7 log.go:172] (0xc002ccc6e0) (0xc001345360) Stream removed, broadcasting: 3 I0418 00:04:06.992841 7 log.go:172] (0xc002ccc6e0) (0xc0017d5b80) Stream removed, broadcasting: 5 Apr 18 00:04:06.992: INFO: Exec stderr: "" Apr 18 00:04:06.992: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:06.992: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.018468 7 log.go:172] (0xc002c328f0) (0xc001345c20) Create stream I0418 00:04:07.018503 7 log.go:172] (0xc002c328f0) (0xc001345c20) Stream added, broadcasting: 1 I0418 00:04:07.020505 7 log.go:172] (0xc002c328f0) Reply frame received for 1 I0418 00:04:07.020536 7 log.go:172] (0xc002c328f0) (0xc0017d5d60) Create stream I0418 00:04:07.020547 7 log.go:172] (0xc002c328f0) (0xc0017d5d60) Stream added, broadcasting: 3 I0418 00:04:07.021536 7 log.go:172] (0xc002c328f0) Reply frame received for 3 I0418 00:04:07.021568 7 log.go:172] (0xc002c328f0) (0xc000c808c0) Create stream I0418 00:04:07.021578 7 log.go:172] (0xc002c328f0) (0xc000c808c0) Stream added, broadcasting: 5 I0418 00:04:07.022218 7 log.go:172] (0xc002c328f0) Reply frame received for 5 I0418 00:04:07.096823 7 log.go:172] (0xc002c328f0) Data frame received for 3 I0418 00:04:07.096848 7 log.go:172] (0xc0017d5d60) (3) Data frame handling I0418 00:04:07.096863 7 log.go:172] (0xc0017d5d60) (3) Data frame sent I0418 00:04:07.096869 7 log.go:172] (0xc002c328f0) Data frame received for 3 I0418 00:04:07.096879 7 log.go:172] (0xc0017d5d60) (3) Data frame handling I0418 00:04:07.096966 7 log.go:172] (0xc002c328f0) Data frame received for 5 I0418 00:04:07.096991 7 log.go:172] (0xc000c808c0) (5) Data frame handling I0418 00:04:07.098683 7 log.go:172] (0xc002c328f0) Data frame received for 1 I0418 00:04:07.098703 7 log.go:172] (0xc001345c20) (1) Data frame handling I0418 00:04:07.098730 7 log.go:172] 
(0xc001345c20) (1) Data frame sent I0418 00:04:07.098743 7 log.go:172] (0xc002c328f0) (0xc001345c20) Stream removed, broadcasting: 1 I0418 00:04:07.098763 7 log.go:172] (0xc002c328f0) Go away received I0418 00:04:07.098809 7 log.go:172] (0xc002c328f0) (0xc001345c20) Stream removed, broadcasting: 1 I0418 00:04:07.098819 7 log.go:172] (0xc002c328f0) (0xc0017d5d60) Stream removed, broadcasting: 3 I0418 00:04:07.098828 7 log.go:172] (0xc002c328f0) (0xc000c808c0) Stream removed, broadcasting: 5 Apr 18 00:04:07.098: INFO: Exec stderr: "" Apr 18 00:04:07.098: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.098: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.130114 7 log.go:172] (0xc002ada580) (0xc000ba6a00) Create stream I0418 00:04:07.130149 7 log.go:172] (0xc002ada580) (0xc000ba6a00) Stream added, broadcasting: 1 I0418 00:04:07.132670 7 log.go:172] (0xc002ada580) Reply frame received for 1 I0418 00:04:07.132705 7 log.go:172] (0xc002ada580) (0xc001345d60) Create stream I0418 00:04:07.132718 7 log.go:172] (0xc002ada580) (0xc001345d60) Stream added, broadcasting: 3 I0418 00:04:07.134216 7 log.go:172] (0xc002ada580) Reply frame received for 3 I0418 00:04:07.134252 7 log.go:172] (0xc002ada580) (0xc000c9e640) Create stream I0418 00:04:07.134265 7 log.go:172] (0xc002ada580) (0xc000c9e640) Stream added, broadcasting: 5 I0418 00:04:07.135370 7 log.go:172] (0xc002ada580) Reply frame received for 5 I0418 00:04:07.196708 7 log.go:172] (0xc002ada580) Data frame received for 3 I0418 00:04:07.196777 7 log.go:172] (0xc001345d60) (3) Data frame handling I0418 00:04:07.196796 7 log.go:172] (0xc001345d60) (3) Data frame sent I0418 00:04:07.196820 7 log.go:172] (0xc002ada580) Data frame received for 5 I0418 00:04:07.196860 7 log.go:172] (0xc000c9e640) (5) Data frame handling I0418 00:04:07.196890 7 log.go:172] 
(0xc002ada580) Data frame received for 3 I0418 00:04:07.196905 7 log.go:172] (0xc001345d60) (3) Data frame handling I0418 00:04:07.198884 7 log.go:172] (0xc002ada580) Data frame received for 1 I0418 00:04:07.198899 7 log.go:172] (0xc000ba6a00) (1) Data frame handling I0418 00:04:07.198912 7 log.go:172] (0xc000ba6a00) (1) Data frame sent I0418 00:04:07.198927 7 log.go:172] (0xc002ada580) (0xc000ba6a00) Stream removed, broadcasting: 1 I0418 00:04:07.199034 7 log.go:172] (0xc002ada580) (0xc000ba6a00) Stream removed, broadcasting: 1 I0418 00:04:07.199084 7 log.go:172] (0xc002ada580) (0xc001345d60) Stream removed, broadcasting: 3 I0418 00:04:07.199095 7 log.go:172] (0xc002ada580) (0xc000c9e640) Stream removed, broadcasting: 5 Apr 18 00:04:07.199: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 18 00:04:07.199: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.199: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.199313 7 log.go:172] (0xc002ada580) Go away received I0418 00:04:07.227663 7 log.go:172] (0xc002adad10) (0xc000ba72c0) Create stream I0418 00:04:07.227690 7 log.go:172] (0xc002adad10) (0xc000ba72c0) Stream added, broadcasting: 1 I0418 00:04:07.230249 7 log.go:172] (0xc002adad10) Reply frame received for 1 I0418 00:04:07.230288 7 log.go:172] (0xc002adad10) (0xc0010120a0) Create stream I0418 00:04:07.230301 7 log.go:172] (0xc002adad10) (0xc0010120a0) Stream added, broadcasting: 3 I0418 00:04:07.231278 7 log.go:172] (0xc002adad10) Reply frame received for 3 I0418 00:04:07.231303 7 log.go:172] (0xc002adad10) (0xc001012320) Create stream I0418 00:04:07.231316 7 log.go:172] (0xc002adad10) (0xc001012320) Stream added, broadcasting: 5 I0418 00:04:07.232352 7 log.go:172] (0xc002adad10) Reply frame received for 5 I0418 
00:04:07.293024 7 log.go:172] (0xc002adad10) Data frame received for 3 I0418 00:04:07.293096 7 log.go:172] (0xc0010120a0) (3) Data frame handling I0418 00:04:07.293376 7 log.go:172] (0xc002adad10) Data frame received for 5 I0418 00:04:07.293543 7 log.go:172] (0xc001012320) (5) Data frame handling I0418 00:04:07.294856 7 log.go:172] (0xc002adad10) Data frame received for 1 I0418 00:04:07.294915 7 log.go:172] (0xc000ba72c0) (1) Data frame handling I0418 00:04:07.294964 7 log.go:172] (0xc000ba72c0) (1) Data frame sent I0418 00:04:07.295051 7 log.go:172] (0xc002adad10) (0xc000ba72c0) Stream removed, broadcasting: 1 I0418 00:04:07.297905 7 log.go:172] (0xc0010120a0) (3) Data frame sent I0418 00:04:07.297947 7 log.go:172] (0xc002adad10) Data frame received for 3 I0418 00:04:07.297961 7 log.go:172] (0xc0010120a0) (3) Data frame handling I0418 00:04:07.298022 7 log.go:172] (0xc002adad10) Go away received I0418 00:04:07.298119 7 log.go:172] (0xc002adad10) (0xc000ba72c0) Stream removed, broadcasting: 1 I0418 00:04:07.298141 7 log.go:172] (0xc002adad10) (0xc0010120a0) Stream removed, broadcasting: 3 I0418 00:04:07.298158 7 log.go:172] (0xc002adad10) (0xc001012320) Stream removed, broadcasting: 5 Apr 18 00:04:07.298: INFO: Exec stderr: "" Apr 18 00:04:07.298: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.298: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.327307 7 log.go:172] (0xc002cccd10) (0xc001152280) Create stream I0418 00:04:07.327335 7 log.go:172] (0xc002cccd10) (0xc001152280) Stream added, broadcasting: 1 I0418 00:04:07.330651 7 log.go:172] (0xc002cccd10) Reply frame received for 1 I0418 00:04:07.330713 7 log.go:172] (0xc002cccd10) (0xc001345e00) Create stream I0418 00:04:07.330734 7 log.go:172] (0xc002cccd10) (0xc001345e00) Stream added, broadcasting: 3 I0418 00:04:07.331691 7 
log.go:172] (0xc002cccd10) Reply frame received for 3 I0418 00:04:07.331741 7 log.go:172] (0xc002cccd10) (0xc000ba7360) Create stream I0418 00:04:07.331764 7 log.go:172] (0xc002cccd10) (0xc000ba7360) Stream added, broadcasting: 5 I0418 00:04:07.332574 7 log.go:172] (0xc002cccd10) Reply frame received for 5 I0418 00:04:07.457490 7 log.go:172] (0xc002cccd10) Data frame received for 3 I0418 00:04:07.457518 7 log.go:172] (0xc001345e00) (3) Data frame handling I0418 00:04:07.457535 7 log.go:172] (0xc001345e00) (3) Data frame sent I0418 00:04:07.457557 7 log.go:172] (0xc002cccd10) Data frame received for 3 I0418 00:04:07.457564 7 log.go:172] (0xc001345e00) (3) Data frame handling I0418 00:04:07.457597 7 log.go:172] (0xc002cccd10) Data frame received for 5 I0418 00:04:07.457612 7 log.go:172] (0xc000ba7360) (5) Data frame handling I0418 00:04:07.458771 7 log.go:172] (0xc002cccd10) Data frame received for 1 I0418 00:04:07.458829 7 log.go:172] (0xc001152280) (1) Data frame handling I0418 00:04:07.458858 7 log.go:172] (0xc001152280) (1) Data frame sent I0418 00:04:07.459098 7 log.go:172] (0xc002cccd10) (0xc001152280) Stream removed, broadcasting: 1 I0418 00:04:07.459177 7 log.go:172] (0xc002cccd10) Go away received I0418 00:04:07.459310 7 log.go:172] (0xc002cccd10) (0xc001152280) Stream removed, broadcasting: 1 I0418 00:04:07.459378 7 log.go:172] (0xc002cccd10) (0xc001345e00) Stream removed, broadcasting: 3 I0418 00:04:07.459430 7 log.go:172] (0xc002cccd10) (0xc000ba7360) Stream removed, broadcasting: 5 Apr 18 00:04:07.459: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 18 00:04:07.459: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.459: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.488592 7 log.go:172] 
(0xc002f0c9a0) (0xc000c9eb40) Create stream I0418 00:04:07.488628 7 log.go:172] (0xc002f0c9a0) (0xc000c9eb40) Stream added, broadcasting: 1 I0418 00:04:07.490781 7 log.go:172] (0xc002f0c9a0) Reply frame received for 1 I0418 00:04:07.490823 7 log.go:172] (0xc002f0c9a0) (0xc0010123c0) Create stream I0418 00:04:07.490839 7 log.go:172] (0xc002f0c9a0) (0xc0010123c0) Stream added, broadcasting: 3 I0418 00:04:07.491774 7 log.go:172] (0xc002f0c9a0) Reply frame received for 3 I0418 00:04:07.491796 7 log.go:172] (0xc002f0c9a0) (0xc001152960) Create stream I0418 00:04:07.491805 7 log.go:172] (0xc002f0c9a0) (0xc001152960) Stream added, broadcasting: 5 I0418 00:04:07.492605 7 log.go:172] (0xc002f0c9a0) Reply frame received for 5 I0418 00:04:07.563248 7 log.go:172] (0xc002f0c9a0) Data frame received for 3 I0418 00:04:07.563277 7 log.go:172] (0xc0010123c0) (3) Data frame handling I0418 00:04:07.563303 7 log.go:172] (0xc0010123c0) (3) Data frame sent I0418 00:04:07.563325 7 log.go:172] (0xc002f0c9a0) Data frame received for 3 I0418 00:04:07.563330 7 log.go:172] (0xc0010123c0) (3) Data frame handling I0418 00:04:07.563352 7 log.go:172] (0xc002f0c9a0) Data frame received for 5 I0418 00:04:07.563366 7 log.go:172] (0xc001152960) (5) Data frame handling I0418 00:04:07.564922 7 log.go:172] (0xc002f0c9a0) Data frame received for 1 I0418 00:04:07.564963 7 log.go:172] (0xc000c9eb40) (1) Data frame handling I0418 00:04:07.565010 7 log.go:172] (0xc000c9eb40) (1) Data frame sent I0418 00:04:07.565034 7 log.go:172] (0xc002f0c9a0) (0xc000c9eb40) Stream removed, broadcasting: 1 I0418 00:04:07.565057 7 log.go:172] (0xc002f0c9a0) Go away received I0418 00:04:07.565570 7 log.go:172] (0xc002f0c9a0) (0xc000c9eb40) Stream removed, broadcasting: 1 I0418 00:04:07.565601 7 log.go:172] (0xc002f0c9a0) (0xc0010123c0) Stream removed, broadcasting: 3 I0418 00:04:07.565624 7 log.go:172] (0xc002f0c9a0) (0xc001152960) Stream removed, broadcasting: 5 Apr 18 00:04:07.565: INFO: Exec stderr: "" Apr 18 00:04:07.565: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.565: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.597405 7 log.go:172] (0xc002ccd340) (0xc001152c80) Create stream I0418 00:04:07.597585 7 log.go:172] (0xc002ccd340) (0xc001152c80) Stream added, broadcasting: 1 I0418 00:04:07.600515 7 log.go:172] (0xc002ccd340) Reply frame received for 1 I0418 00:04:07.600552 7 log.go:172] (0xc002ccd340) (0xc000c9ee60) Create stream I0418 00:04:07.600569 7 log.go:172] (0xc002ccd340) (0xc000c9ee60) Stream added, broadcasting: 3 I0418 00:04:07.601844 7 log.go:172] (0xc002ccd340) Reply frame received for 3 I0418 00:04:07.601900 7 log.go:172] (0xc002ccd340) (0xc000c9f180) Create stream I0418 00:04:07.601926 7 log.go:172] (0xc002ccd340) (0xc000c9f180) Stream added, broadcasting: 5 I0418 00:04:07.603249 7 log.go:172] (0xc002ccd340) Reply frame received for 5 I0418 00:04:07.663485 7 log.go:172] (0xc002ccd340) Data frame received for 3 I0418 00:04:07.663520 7 log.go:172] (0xc000c9ee60) (3) Data frame handling I0418 00:04:07.663531 7 log.go:172] (0xc000c9ee60) (3) Data frame sent I0418 00:04:07.663538 7 log.go:172] (0xc002ccd340) Data frame received for 3 I0418 00:04:07.663553 7 log.go:172] (0xc000c9ee60) (3) Data frame handling I0418 00:04:07.663581 7 log.go:172] (0xc002ccd340) Data frame received for 5 I0418 00:04:07.663593 7 log.go:172] (0xc000c9f180) (5) Data frame handling I0418 00:04:07.664965 7 log.go:172] (0xc002ccd340) Data frame received for 1 I0418 00:04:07.664989 7 log.go:172] (0xc001152c80) (1) Data frame handling I0418 00:04:07.665009 7 log.go:172] (0xc001152c80) (1) Data frame sent I0418 00:04:07.665096 7 log.go:172] (0xc002ccd340) (0xc001152c80) Stream removed, broadcasting: 1 I0418 00:04:07.665272 7 log.go:172] (0xc002ccd340) (0xc001152c80) Stream removed, broadcasting: 1 I0418 
00:04:07.665287 7 log.go:172] (0xc002ccd340) (0xc000c9ee60) Stream removed, broadcasting: 3 I0418 00:04:07.665423 7 log.go:172] (0xc002ccd340) (0xc000c9f180) Stream removed, broadcasting: 5 Apr 18 00:04:07.665: INFO: Exec stderr: "" Apr 18 00:04:07.665: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.665: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.665931 7 log.go:172] (0xc002ccd340) Go away received I0418 00:04:07.703240 7 log.go:172] (0xc0024b4580) (0xc001012a00) Create stream I0418 00:04:07.703276 7 log.go:172] (0xc0024b4580) (0xc001012a00) Stream added, broadcasting: 1 I0418 00:04:07.705942 7 log.go:172] (0xc0024b4580) Reply frame received for 1 I0418 00:04:07.706031 7 log.go:172] (0xc0024b4580) (0xc00194c000) Create stream I0418 00:04:07.706073 7 log.go:172] (0xc0024b4580) (0xc00194c000) Stream added, broadcasting: 3 I0418 00:04:07.707242 7 log.go:172] (0xc0024b4580) Reply frame received for 3 I0418 00:04:07.707314 7 log.go:172] (0xc0024b4580) (0xc001152fa0) Create stream I0418 00:04:07.707325 7 log.go:172] (0xc0024b4580) (0xc001152fa0) Stream added, broadcasting: 5 I0418 00:04:07.708375 7 log.go:172] (0xc0024b4580) Reply frame received for 5 I0418 00:04:07.770665 7 log.go:172] (0xc0024b4580) Data frame received for 5 I0418 00:04:07.770707 7 log.go:172] (0xc001152fa0) (5) Data frame handling I0418 00:04:07.770730 7 log.go:172] (0xc0024b4580) Data frame received for 3 I0418 00:04:07.770744 7 log.go:172] (0xc00194c000) (3) Data frame handling I0418 00:04:07.770758 7 log.go:172] (0xc00194c000) (3) Data frame sent I0418 00:04:07.770769 7 log.go:172] (0xc0024b4580) Data frame received for 3 I0418 00:04:07.770780 7 log.go:172] (0xc00194c000) (3) Data frame handling I0418 00:04:07.772289 7 log.go:172] (0xc0024b4580) Data frame received for 1 I0418 00:04:07.772325 7 log.go:172] 
(0xc001012a00) (1) Data frame handling I0418 00:04:07.772359 7 log.go:172] (0xc001012a00) (1) Data frame sent I0418 00:04:07.772387 7 log.go:172] (0xc0024b4580) (0xc001012a00) Stream removed, broadcasting: 1 I0418 00:04:07.772419 7 log.go:172] (0xc0024b4580) Go away received I0418 00:04:07.772557 7 log.go:172] (0xc0024b4580) (0xc001012a00) Stream removed, broadcasting: 1 I0418 00:04:07.772591 7 log.go:172] (0xc0024b4580) (0xc00194c000) Stream removed, broadcasting: 3 I0418 00:04:07.772603 7 log.go:172] (0xc0024b4580) (0xc001152fa0) Stream removed, broadcasting: 5 Apr 18 00:04:07.772: INFO: Exec stderr: "" Apr 18 00:04:07.772: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9952 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:04:07.772: INFO: >>> kubeConfig: /root/.kube/config I0418 00:04:07.808657 7 log.go:172] (0xc002f0cfd0) (0xc000c9fcc0) Create stream I0418 00:04:07.808689 7 log.go:172] (0xc002f0cfd0) (0xc000c9fcc0) Stream added, broadcasting: 1 I0418 00:04:07.811406 7 log.go:172] (0xc002f0cfd0) Reply frame received for 1 I0418 00:04:07.811431 7 log.go:172] (0xc002f0cfd0) (0xc0011530e0) Create stream I0418 00:04:07.811439 7 log.go:172] (0xc002f0cfd0) (0xc0011530e0) Stream added, broadcasting: 3 I0418 00:04:07.812539 7 log.go:172] (0xc002f0cfd0) Reply frame received for 3 I0418 00:04:07.812595 7 log.go:172] (0xc002f0cfd0) (0xc001012c80) Create stream I0418 00:04:07.812613 7 log.go:172] (0xc002f0cfd0) (0xc001012c80) Stream added, broadcasting: 5 I0418 00:04:07.813984 7 log.go:172] (0xc002f0cfd0) Reply frame received for 5 I0418 00:04:07.865449 7 log.go:172] (0xc002f0cfd0) Data frame received for 5 I0418 00:04:07.865495 7 log.go:172] (0xc001012c80) (5) Data frame handling I0418 00:04:07.865525 7 log.go:172] (0xc002f0cfd0) Data frame received for 3 I0418 00:04:07.865561 7 log.go:172] (0xc0011530e0) (3) Data frame handling I0418 
00:04:07.865590 7 log.go:172] (0xc0011530e0) (3) Data frame sent I0418 00:04:07.865600 7 log.go:172] (0xc002f0cfd0) Data frame received for 3 I0418 00:04:07.865612 7 log.go:172] (0xc0011530e0) (3) Data frame handling I0418 00:04:07.866951 7 log.go:172] (0xc002f0cfd0) Data frame received for 1 I0418 00:04:07.866986 7 log.go:172] (0xc000c9fcc0) (1) Data frame handling I0418 00:04:07.867008 7 log.go:172] (0xc000c9fcc0) (1) Data frame sent I0418 00:04:07.867051 7 log.go:172] (0xc002f0cfd0) (0xc000c9fcc0) Stream removed, broadcasting: 1 I0418 00:04:07.867168 7 log.go:172] (0xc002f0cfd0) Go away received I0418 00:04:07.867196 7 log.go:172] (0xc002f0cfd0) (0xc000c9fcc0) Stream removed, broadcasting: 1 I0418 00:04:07.867215 7 log.go:172] (0xc002f0cfd0) (0xc0011530e0) Stream removed, broadcasting: 3 I0418 00:04:07.867239 7 log.go:172] (0xc002f0cfd0) (0xc001012c80) Stream removed, broadcasting: 5 Apr 18 00:04:07.867: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:07.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9952" for this suite. 
• [SLOW TEST:11.219 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1547,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:07.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:04:07.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192" in namespace "projected-7222" to be "Succeeded or Failed" Apr 18 00:04:07.964: INFO: Pod "downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.069458ms Apr 18 00:04:09.968: INFO: Pod "downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006536917s Apr 18 00:04:11.972: INFO: Pod "downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010888725s STEP: Saw pod success Apr 18 00:04:11.972: INFO: Pod "downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192" satisfied condition "Succeeded or Failed" Apr 18 00:04:11.975: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192 container client-container: STEP: delete the pod Apr 18 00:04:12.004: INFO: Waiting for pod downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192 to disappear Apr 18 00:04:12.035: INFO: Pod downwardapi-volume-328ec216-7e13-4ea0-bf4c-6b6e09325192 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:12.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7222" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1556,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:12.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:17.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6959" for this suite. 
• [SLOW TEST:5.151 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":97,"skipped":1564,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:17.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:04:17.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:04:19.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765057, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765057, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765057, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765057, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:04:22.716: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:04:22.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5505-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:24.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6532" for this suite. STEP: Destroying namespace "webhook-6532-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.575 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":98,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:24.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-9950e592-92c5-4ecb-a473-cd0697cf4470 STEP: Creating a pod to test consume configMaps Apr 18 00:04:24.876: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7" in namespace "projected-6608" to be "Succeeded or Failed" Apr 
18 00:04:24.899: INFO: Pod "pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.780851ms Apr 18 00:04:27.006: INFO: Pod "pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129895173s Apr 18 00:04:29.010: INFO: Pod "pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134434252s STEP: Saw pod success Apr 18 00:04:29.011: INFO: Pod "pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7" satisfied condition "Succeeded or Failed" Apr 18 00:04:29.014: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7 container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:04:29.085: INFO: Waiting for pod pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7 to disappear Apr 18 00:04:29.090: INFO: Pod pod-projected-configmaps-376cfde8-b54a-44d5-ba4f-beda4f40d3c7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:29.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6608" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1616,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:29.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 18 00:04:29.153: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929052 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:04:29.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929053 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:04:29.153: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929054 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 18 00:04:39.194: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929106 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:04:39.194: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929107 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:04:39.194: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2086 /api/v1/namespaces/watch-2086/configmaps/e2e-watch-test-label-changed a000d1e7-7c5d-413e-b796-559b413a59a1 8929108 0 2020-04-18 00:04:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:39.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2086" for this suite. • [SLOW TEST:10.106 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":100,"skipped":1617,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:39.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 18 00:04:47.344: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 18 00:04:47.401: INFO: Pod pod-with-poststart-exec-hook still exists Apr 18 00:04:49.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 18 00:04:49.406: INFO: Pod pod-with-poststart-exec-hook still exists Apr 18 00:04:51.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 18 00:04:51.406: INFO: Pod pod-with-poststart-exec-hook still exists Apr 18 00:04:53.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 18 00:04:53.406: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:04:53.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-991" for this suite. 
• [SLOW TEST:14.210 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1626,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:04:53.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-0546b9e5-71fb-4790-8f7f-b445ceb03cb5 in namespace container-probe-3541 Apr 18 00:04:57.519: INFO: Started pod liveness-0546b9e5-71fb-4790-8f7f-b445ceb03cb5 in namespace container-probe-3541 STEP: checking the pod's current state and verifying that restartCount is present Apr 18 00:04:57.522: 
INFO: Initial restart count of pod liveness-0546b9e5-71fb-4790-8f7f-b445ceb03cb5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:08:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3541" for this suite. • [SLOW TEST:245.529 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1629,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:08:58.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-4e6f9ef7-5856-4771-ac70-f5b6ae4f5dc1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4e6f9ef7-5856-4771-ac70-f5b6ae4f5dc1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:10:14.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5226" for this suite.

• [SLOW TEST:75.622 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1637,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:10:14.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:10:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4481" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":104,"skipped":1649,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:10:14.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 18 00:10:14.804: INFO: Waiting up to 5m0s for pod "pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b" in namespace "emptydir-6968" to be "Succeeded or Failed"
Apr 18 00:10:14.843: INFO: Pod "pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.578439ms
Apr 18 00:10:16.847: INFO: Pod "pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042871017s
Apr 18 00:10:18.851: INFO: Pod "pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046867958s
STEP: Saw pod success
Apr 18 00:10:18.851: INFO: Pod "pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b" satisfied condition "Succeeded or Failed"
Apr 18 00:10:18.854: INFO: Trying to get logs from node latest-worker pod pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b container test-container:
STEP: delete the pod
Apr 18 00:10:18.887: INFO: Waiting for pod pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b to disappear
Apr 18 00:10:18.891: INFO: Pod pod-14ed11bc-916e-47bb-b6ee-a117c6bc497b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:10:18.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6968" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1650,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:10:18.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:10:34.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-397" for this suite.
STEP: Destroying namespace "nsdeletetest-2791" for this suite.
Apr 18 00:10:34.176: INFO: Namespace nsdeletetest-2791 was already deleted
STEP: Destroying namespace "nsdeletetest-5866" for this suite.

• [SLOW TEST:15.279 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":106,"skipped":1710,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:10:34.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d7ce48fd-6860-404f-b6bd-42a6456116d2
STEP: Creating a pod to test consume configMaps
Apr 18 00:10:34.276: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c" in namespace "configmap-619" to be "Succeeded or Failed"
Apr 18 00:10:34.284: INFO: Pod "pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583714ms
Apr 18 00:10:36.287: INFO: Pod "pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010827923s
Apr 18 00:10:38.291: INFO: Pod "pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015030035s
STEP: Saw pod success
Apr 18 00:10:38.291: INFO: Pod "pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c" satisfied condition "Succeeded or Failed"
Apr 18 00:10:38.295: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c container configmap-volume-test:
STEP: delete the pod
Apr 18 00:10:38.341: INFO: Waiting for pod pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c to disappear
Apr 18 00:10:38.350: INFO: Pod pod-configmaps-ecffb481-237b-41f9-a0fb-8eafdea8f78c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:10:38.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-619" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1725,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:10:38.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:11:38.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1042" for this suite.
• [SLOW TEST:60.094 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1729,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:11:38.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 18 00:11:39.168: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 18 00:11:41.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 18 00:11:43.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765499, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 18 00:11:46.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:11:46.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-53" for this suite.
STEP: Destroying namespace "webhook-53-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.901 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":109,"skipped":1749,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:11:46.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4927
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4927
STEP: creating replication controller externalsvc in namespace services-4927
I0418 00:11:46.514600 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4927, replica count: 2
I0418 00:11:49.565056 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0418 00:11:52.565329 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 18 00:11:52.600: INFO: Creating new exec pod
Apr 18 00:11:56.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4927 execpoddrcqm -- /bin/sh -x -c nslookup clusterip-service'
Apr 18 00:11:59.100: INFO: stderr: "I0418 00:11:58.998274    1394 log.go:172] (0xc00003a630) (0xc0008461e0) Create stream\nI0418 00:11:58.998401    1394 log.go:172] (0xc00003a630) (0xc0008461e0) Stream added, broadcasting: 1\nI0418 00:11:59.001389    1394 log.go:172] (0xc00003a630) Reply frame received for 1\nI0418 00:11:59.001420    1394 log.go:172] (0xc00003a630) (0xc0007b0000) Create stream\nI0418 00:11:59.001426    1394 log.go:172] (0xc00003a630) (0xc0007b0000) Stream added, broadcasting: 3\nI0418 00:11:59.002244    1394 log.go:172] (0xc00003a630) Reply frame received for 3\nI0418 00:11:59.002295    1394 log.go:172] (0xc00003a630) (0xc0007d7220) Create stream\nI0418 00:11:59.002318    1394 log.go:172] (0xc00003a630) (0xc0007d7220) Stream added, broadcasting: 5\nI0418 00:11:59.003203    1394 log.go:172] (0xc00003a630) Reply frame received for 5\nI0418 00:11:59.084809    1394 log.go:172] (0xc00003a630) Data frame received for 5\nI0418 00:11:59.084841    1394 log.go:172] (0xc0007d7220) (5) Data frame handling\nI0418 00:11:59.084857    1394 log.go:172] (0xc0007d7220) (5) Data frame sent\n+ nslookup clusterip-service\nI0418 00:11:59.091258    1394 log.go:172] (0xc00003a630) Data frame received for 3\nI0418 00:11:59.091295    1394 log.go:172] (0xc0007b0000) (3) Data frame handling\nI0418 00:11:59.091323    1394 log.go:172] (0xc0007b0000) (3) Data frame sent\nI0418 00:11:59.092601    1394 log.go:172] (0xc00003a630) Data frame received for 3\nI0418 00:11:59.092618    1394 log.go:172] (0xc0007b0000) (3) Data frame handling\nI0418 00:11:59.092636    1394 log.go:172] (0xc0007b0000) (3) Data frame sent\nI0418 00:11:59.093045    1394 log.go:172] (0xc00003a630) Data frame received for 3\nI0418 00:11:59.093059    1394 log.go:172] (0xc0007b0000) (3) Data frame handling\nI0418 00:11:59.093304    1394 log.go:172] (0xc00003a630) Data frame received for 5\nI0418 00:11:59.093320    1394 log.go:172] (0xc0007d7220) (5) Data frame handling\nI0418 00:11:59.095060    1394 log.go:172] (0xc00003a630) Data frame received for 1\nI0418 00:11:59.095072    1394 log.go:172] (0xc0008461e0) (1) Data frame handling\nI0418 00:11:59.095079    1394 log.go:172] (0xc0008461e0) (1) Data frame sent\nI0418 00:11:59.095087    1394 log.go:172] (0xc00003a630) (0xc0008461e0) Stream removed, broadcasting: 1\nI0418 00:11:59.095096    1394 log.go:172] (0xc00003a630) Go away received\nI0418 00:11:59.095310    1394 log.go:172] (0xc00003a630) (0xc0008461e0) Stream removed, broadcasting: 1\nI0418 00:11:59.095322    1394 log.go:172] (0xc00003a630) (0xc0007b0000) Stream removed, broadcasting: 3\nI0418 00:11:59.095326    1394 log.go:172] (0xc00003a630) (0xc0007d7220) Stream removed, broadcasting: 5\n"
Apr 18 00:11:59.100: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4927.svc.cluster.local\tcanonical name = externalsvc.services-4927.svc.cluster.local.\nName:\texternalsvc.services-4927.svc.cluster.local\nAddress: 10.96.84.167\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4927, will wait for the garbage collector to delete the pods
Apr 18 00:11:59.172: INFO: Deleting ReplicationController externalsvc took: 17.565077ms
Apr 18 00:11:59.272: INFO: Terminating ReplicationController externalsvc pods took: 100.239054ms
Apr 18 00:12:12.817: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:12:12.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4927" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:26.504 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":110,"skipped":1782,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:12:12.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4abbecbe-9fe5-44df-b9af-4c48c2e3177c
STEP: Creating a pod to test consume secrets
Apr 18 00:12:13.023: INFO: Waiting up to 5m0s for pod "pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749" in namespace "secrets-1760" to be "Succeeded or Failed"
Apr 18 00:12:13.035: INFO: Pod "pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749": Phase="Pending", Reason="", readiness=false. Elapsed: 11.977883ms
Apr 18 00:12:15.039: INFO: Pod "pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016268903s
Apr 18 00:12:17.043: INFO: Pod "pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020227614s
STEP: Saw pod success
Apr 18 00:12:17.043: INFO: Pod "pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749" satisfied condition "Succeeded or Failed"
Apr 18 00:12:17.046: INFO: Trying to get logs from node latest-worker pod pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749 container secret-volume-test:
STEP: delete the pod
Apr 18 00:12:17.078: INFO: Waiting for pod pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749 to disappear
Apr 18 00:12:17.082: INFO: Pod pod-secrets-ca999139-8d87-447d-bc3b-a41ad048b749 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:12:17.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1760" for this suite.
STEP: Destroying namespace "secret-namespace-3823" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1794,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:12:17.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Apr 18 00:12:17.169: INFO: Waiting up to 5m0s for pod "client-containers-f347988e-a265-45a2-89b1-d36535a07f86" in namespace "containers-8806" to be "Succeeded or Failed"
Apr 18 00:12:17.201: INFO: Pod "client-containers-f347988e-a265-45a2-89b1-d36535a07f86": Phase="Pending", Reason="", readiness=false. Elapsed: 31.338523ms
Apr 18 00:12:19.205: INFO: Pod "client-containers-f347988e-a265-45a2-89b1-d36535a07f86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035312652s
Apr 18 00:12:21.209: INFO: Pod "client-containers-f347988e-a265-45a2-89b1-d36535a07f86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039297389s
STEP: Saw pod success
Apr 18 00:12:21.209: INFO: Pod "client-containers-f347988e-a265-45a2-89b1-d36535a07f86" satisfied condition "Succeeded or Failed"
Apr 18 00:12:21.211: INFO: Trying to get logs from node latest-worker2 pod client-containers-f347988e-a265-45a2-89b1-d36535a07f86 container test-container:
STEP: delete the pod
Apr 18 00:12:21.237: INFO: Waiting for pod client-containers-f347988e-a265-45a2-89b1-d36535a07f86 to disappear
Apr 18 00:12:21.247: INFO: Pod client-containers-f347988e-a265-45a2-89b1-d36535a07f86 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:12:21.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8806" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1814,"failed":0}
SS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:12:21.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 18 00:12:21.388: INFO: Created pod &Pod{ObjectMeta:{dns-2082 dns-2082 /api/v1/namespaces/dns-2082/pods/dns-2082 4b23d2e7-7084-4c64-aef9-391c0bdbde95 8930887 0 2020-04-18 00:12:21 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7x9sz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7x9sz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7x9sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 18 00:12:21.392: INFO: The status of Pod dns-2082 is Pending, waiting for it to be Running (with Ready = true)
Apr 18 00:12:23.396: INFO: The status of Pod dns-2082 is Pending, waiting for it to be Running (with Ready = true)
Apr 18 00:12:25.396: INFO: The status of Pod dns-2082 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Apr 18 00:12:25.396: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2082 PodName:dns-2082 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:12:25.396: INFO: >>> kubeConfig: /root/.kube/config I0418 00:12:25.427377 7 log.go:172] (0xc0024b4630) (0xc0017d4320) Create stream I0418 00:12:25.427403 7 log.go:172] (0xc0024b4630) (0xc0017d4320) Stream added, broadcasting: 1 I0418 00:12:25.429794 7 log.go:172] (0xc0024b4630) Reply frame received for 1 I0418 00:12:25.429847 7 log.go:172] (0xc0024b4630) (0xc0019030e0) Create stream I0418 00:12:25.429860 7 log.go:172] (0xc0024b4630) (0xc0019030e0) Stream added, broadcasting: 3 I0418 00:12:25.431128 7 log.go:172] (0xc0024b4630) Reply frame received for 3 I0418 00:12:25.431170 7 log.go:172] (0xc0024b4630) (0xc000ce5ea0) Create stream I0418 00:12:25.431184 7 log.go:172] (0xc0024b4630) (0xc000ce5ea0) Stream added, broadcasting: 5 I0418 00:12:25.432231 7 log.go:172] (0xc0024b4630) Reply frame received for 5 I0418 00:12:25.521204 7 log.go:172] (0xc0024b4630) Data frame received for 5 I0418 00:12:25.521250 7 log.go:172] (0xc000ce5ea0) (5) Data frame handling I0418 00:12:25.521300 7 log.go:172] (0xc0024b4630) Data frame received for 3 I0418 00:12:25.521347 7 log.go:172] (0xc0019030e0) (3) Data frame handling I0418 00:12:25.521391 7 log.go:172] (0xc0019030e0) (3) Data frame sent I0418 00:12:25.521524 7 log.go:172] (0xc0024b4630) Data frame received for 3 I0418 00:12:25.521549 7 log.go:172] (0xc0019030e0) (3) Data frame handling I0418 00:12:25.523306 7 log.go:172] (0xc0024b4630) Data frame received for 1 I0418 00:12:25.523323 7 log.go:172] (0xc0017d4320) (1) Data frame handling I0418 00:12:25.523330 7 log.go:172] (0xc0017d4320) (1) Data frame sent I0418 00:12:25.523465 7 log.go:172] (0xc0024b4630) (0xc0017d4320) Stream removed, broadcasting: 1 I0418 00:12:25.523620 7 log.go:172] (0xc0024b4630) (0xc0017d4320) Stream removed, broadcasting: 1 I0418 00:12:25.523639 7 
log.go:172] (0xc0024b4630) Go away received I0418 00:12:25.523655 7 log.go:172] (0xc0024b4630) (0xc0019030e0) Stream removed, broadcasting: 3 I0418 00:12:25.523666 7 log.go:172] (0xc0024b4630) (0xc000ce5ea0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 18 00:12:25.523: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2082 PodName:dns-2082 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:12:25.523: INFO: >>> kubeConfig: /root/.kube/config I0418 00:12:25.559019 7 log.go:172] (0xc002ccca50) (0xc000ddda40) Create stream I0418 00:12:25.559057 7 log.go:172] (0xc002ccca50) (0xc000ddda40) Stream added, broadcasting: 1 I0418 00:12:25.561871 7 log.go:172] (0xc002ccca50) Reply frame received for 1 I0418 00:12:25.561912 7 log.go:172] (0xc002ccca50) (0xc0011521e0) Create stream I0418 00:12:25.561924 7 log.go:172] (0xc002ccca50) (0xc0011521e0) Stream added, broadcasting: 3 I0418 00:12:25.562897 7 log.go:172] (0xc002ccca50) Reply frame received for 3 I0418 00:12:25.562938 7 log.go:172] (0xc002ccca50) (0xc000dddb80) Create stream I0418 00:12:25.562953 7 log.go:172] (0xc002ccca50) (0xc000dddb80) Stream added, broadcasting: 5 I0418 00:12:25.563849 7 log.go:172] (0xc002ccca50) Reply frame received for 5 I0418 00:12:25.651868 7 log.go:172] (0xc002ccca50) Data frame received for 3 I0418 00:12:25.651919 7 log.go:172] (0xc0011521e0) (3) Data frame handling I0418 00:12:25.651948 7 log.go:172] (0xc0011521e0) (3) Data frame sent I0418 00:12:25.653444 7 log.go:172] (0xc002ccca50) Data frame received for 3 I0418 00:12:25.653489 7 log.go:172] (0xc0011521e0) (3) Data frame handling I0418 00:12:25.653627 7 log.go:172] (0xc002ccca50) Data frame received for 5 I0418 00:12:25.653654 7 log.go:172] (0xc000dddb80) (5) Data frame handling I0418 00:12:25.655529 7 log.go:172] (0xc002ccca50) Data frame received for 1 I0418 00:12:25.655619 7 log.go:172] (0xc000ddda40) (1) Data 
frame handling I0418 00:12:25.655667 7 log.go:172] (0xc000ddda40) (1) Data frame sent I0418 00:12:25.655698 7 log.go:172] (0xc002ccca50) (0xc000ddda40) Stream removed, broadcasting: 1 I0418 00:12:25.655733 7 log.go:172] (0xc002ccca50) Go away received I0418 00:12:25.655882 7 log.go:172] (0xc002ccca50) (0xc000ddda40) Stream removed, broadcasting: 1 I0418 00:12:25.655923 7 log.go:172] (0xc002ccca50) (0xc0011521e0) Stream removed, broadcasting: 3 I0418 00:12:25.655951 7 log.go:172] (0xc002ccca50) (0xc000dddb80) Stream removed, broadcasting: 5 Apr 18 00:12:25.655: INFO: Deleting pod dns-2082... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:12:25.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2082" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":113,"skipped":1816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:12:25.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: 
Gathering metrics W0418 00:13:06.020267 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 18 00:13:06.020: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:13:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-804" for this suite. 
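Annotation (not part of the log): the garbage-collector test above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods are *not* collected. A rough sketch of the two pieces involved, in Python rather than the framework's Go; the helper names are hypothetical, but the `propagationPolicy` field is the real API mechanism:

```python
def delete_options(policy="Orphan"):
    """Build a v1 DeleteOptions body. policy="Orphan" tells the
    garbage collector to leave dependents (the rc's pods) alone."""
    assert policy in ("Orphan", "Background", "Foreground")
    return {"kind": "DeleteOptions", "apiVersion": "v1",
            "propagationPolicy": policy}


def orphaned_pods_survive(pods_before, pods_after):
    """Check the test's success condition: every pod that existed
    before the owner was deleted is still present afterwards.
    Both arguments are sets of pod names (stand-ins for list calls)."""
    return pods_before <= pods_after
```

With `policy="Background"` or `"Foreground"` the same check would be expected to fail, since the collector would delete the dependents.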
• [SLOW TEST:40.288 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":114,"skipped":1847,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:13:06.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9443 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9443 I0418 00:13:06.729603 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9443, replica count: 2 I0418 00:13:09.780017 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0418 00:13:12.780207 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 18 00:13:12.780: INFO: Creating new exec pod Apr 18 00:13:18.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9443 execpodhtzrr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 18 00:13:18.235: INFO: stderr: "I0418 00:13:18.148861 1428 log.go:172] (0xc000c96000) (0xc000821360) Create stream\nI0418 00:13:18.148925 1428 log.go:172] (0xc000c96000) (0xc000821360) Stream added, broadcasting: 1\nI0418 00:13:18.150814 1428 log.go:172] (0xc000c96000) Reply frame received for 1\nI0418 00:13:18.150866 1428 log.go:172] (0xc000c96000) (0xc000821400) Create stream\nI0418 00:13:18.150880 1428 log.go:172] (0xc000c96000) (0xc000821400) Stream added, broadcasting: 3\nI0418 00:13:18.151751 1428 log.go:172] (0xc000c96000) Reply frame received for 3\nI0418 00:13:18.151786 1428 log.go:172] (0xc000c96000) (0xc0008214a0) Create stream\nI0418 00:13:18.151797 1428 log.go:172] (0xc000c96000) (0xc0008214a0) Stream added, broadcasting: 5\nI0418 00:13:18.152820 1428 log.go:172] (0xc000c96000) Reply frame received for 5\nI0418 00:13:18.227202 1428 log.go:172] (0xc000c96000) Data frame received for 5\nI0418 00:13:18.227232 1428 log.go:172] (0xc0008214a0) (5) Data frame handling\nI0418 00:13:18.227245 1428 log.go:172] (0xc0008214a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0418 00:13:18.227580 1428 log.go:172] (0xc000c96000) Data frame received for 5\nI0418 00:13:18.227611 1428 log.go:172] (0xc0008214a0) (5) Data frame handling\nI0418 00:13:18.227636 1428 log.go:172] (0xc0008214a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0418 00:13:18.227866 1428 log.go:172] (0xc000c96000) Data frame received for 3\nI0418 
00:13:18.227891 1428 log.go:172] (0xc000821400) (3) Data frame handling\nI0418 00:13:18.227913 1428 log.go:172] (0xc000c96000) Data frame received for 5\nI0418 00:13:18.227923 1428 log.go:172] (0xc0008214a0) (5) Data frame handling\nI0418 00:13:18.229962 1428 log.go:172] (0xc000c96000) Data frame received for 1\nI0418 00:13:18.229983 1428 log.go:172] (0xc000821360) (1) Data frame handling\nI0418 00:13:18.230005 1428 log.go:172] (0xc000821360) (1) Data frame sent\nI0418 00:13:18.230020 1428 log.go:172] (0xc000c96000) (0xc000821360) Stream removed, broadcasting: 1\nI0418 00:13:18.230088 1428 log.go:172] (0xc000c96000) Go away received\nI0418 00:13:18.230407 1428 log.go:172] (0xc000c96000) (0xc000821360) Stream removed, broadcasting: 1\nI0418 00:13:18.230432 1428 log.go:172] (0xc000c96000) (0xc000821400) Stream removed, broadcasting: 3\nI0418 00:13:18.230446 1428 log.go:172] (0xc000c96000) (0xc0008214a0) Stream removed, broadcasting: 5\n" Apr 18 00:13:18.236: INFO: stdout: "" Apr 18 00:13:18.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9443 execpodhtzrr -- /bin/sh -x -c nc -zv -t -w 2 10.96.78.48 80' Apr 18 00:13:18.448: INFO: stderr: "I0418 00:13:18.373935 1448 log.go:172] (0xc00003aa50) (0xc0006af5e0) Create stream\nI0418 00:13:18.374031 1448 log.go:172] (0xc00003aa50) (0xc0006af5e0) Stream added, broadcasting: 1\nI0418 00:13:18.377329 1448 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0418 00:13:18.377373 1448 log.go:172] (0xc00003aa50) (0xc000a56000) Create stream\nI0418 00:13:18.377384 1448 log.go:172] (0xc00003aa50) (0xc000a56000) Stream added, broadcasting: 3\nI0418 00:13:18.378432 1448 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0418 00:13:18.378466 1448 log.go:172] (0xc00003aa50) (0xc000a560a0) Create stream\nI0418 00:13:18.378474 1448 log.go:172] (0xc00003aa50) (0xc000a560a0) Stream added, broadcasting: 5\nI0418 00:13:18.379441 1448 log.go:172] 
(0xc00003aa50) Reply frame received for 5\nI0418 00:13:18.441951 1448 log.go:172] (0xc00003aa50) Data frame received for 3\nI0418 00:13:18.441984 1448 log.go:172] (0xc000a56000) (3) Data frame handling\nI0418 00:13:18.442010 1448 log.go:172] (0xc00003aa50) Data frame received for 5\nI0418 00:13:18.442018 1448 log.go:172] (0xc000a560a0) (5) Data frame handling\nI0418 00:13:18.442026 1448 log.go:172] (0xc000a560a0) (5) Data frame sent\nI0418 00:13:18.442032 1448 log.go:172] (0xc00003aa50) Data frame received for 5\nI0418 00:13:18.442038 1448 log.go:172] (0xc000a560a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.78.48 80\nConnection to 10.96.78.48 80 port [tcp/http] succeeded!\nI0418 00:13:18.443364 1448 log.go:172] (0xc00003aa50) Data frame received for 1\nI0418 00:13:18.443404 1448 log.go:172] (0xc0006af5e0) (1) Data frame handling\nI0418 00:13:18.443439 1448 log.go:172] (0xc0006af5e0) (1) Data frame sent\nI0418 00:13:18.443475 1448 log.go:172] (0xc00003aa50) (0xc0006af5e0) Stream removed, broadcasting: 1\nI0418 00:13:18.443523 1448 log.go:172] (0xc00003aa50) Go away received\nI0418 00:13:18.444014 1448 log.go:172] (0xc00003aa50) (0xc0006af5e0) Stream removed, broadcasting: 1\nI0418 00:13:18.444038 1448 log.go:172] (0xc00003aa50) (0xc000a56000) Stream removed, broadcasting: 3\nI0418 00:13:18.444048 1448 log.go:172] (0xc00003aa50) (0xc000a560a0) Stream removed, broadcasting: 5\n" Apr 18 00:13:18.448: INFO: stdout: "" Apr 18 00:13:18.448: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:13:18.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9443" for this suite. 
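Annotation (not part of the log): both exec probes above run `nc -zv -t -w 2 <host> <port>` from inside the exec pod, first against the service name and then against the ClusterIP, and succeed if a TCP connection can be opened. A minimal Python analogue of that check, assuming nothing beyond the standard library:

```python
import socket


def tcp_reachable(host, port, timeout=2.0):
    """Rough equivalent of `nc -zv -t -w 2 host port`: attempt a TCP
    connect within `timeout` seconds and report success or failure
    without sending any payload."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The test passes when this is true for both `externalname-service:80` and `10.96.78.48:80`, proving the type change to ClusterIP produced a routable virtual IP.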
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.453 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":115,"skipped":1858,"failed":0} [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:13:18.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-71e0891e-114f-4657-82d3-7c13d8f70772 STEP: Creating a pod to test consume configMaps Apr 18 00:13:18.563: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170" in namespace "projected-939" to be "Succeeded or Failed" Apr 18 00:13:18.578: INFO: Pod "pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.743182ms Apr 18 00:13:20.581: INFO: Pod "pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017971077s Apr 18 00:13:22.585: INFO: Pod "pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022328824s STEP: Saw pod success Apr 18 00:13:22.585: INFO: Pod "pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170" satisfied condition "Succeeded or Failed" Apr 18 00:13:22.588: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170 container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:13:22.622: INFO: Waiting for pod pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170 to disappear Apr 18 00:13:22.632: INFO: Pod pod-projected-configmaps-b5b251d0-ddf0-49c6-a5e0-7cd26187e170 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:13:22.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-939" for this suite. 
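Annotation (not part of the log): the repeated `Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"` / `Elapsed: ...` lines come from a poll-until-terminal-phase loop. A sketch of that pattern with the clock and API call injected for testability; this is an illustration, not the e2e framework's actual implementation:

```python
import time


def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns one
    of the `want` phases or `timeout` elapses. get_phase is a
    hypothetical stand-in for a GET on the pod's status."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

In the log above the loop observed `Pending` twice and then `Succeeded` after roughly four seconds, well inside the 5m0s budget.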
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1858,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:13:22.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-526.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-526.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:13:28.792: INFO: DNS probes using dns-526/dns-test-85598dcb-9b4a-4aeb-90fb-43b415d09995 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:13:28.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-526" for this suite. 
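Annotation (not part of the log): the wheezy/jessie probe scripts above derive the pod's A record by dashing its IP with `awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'` and then resolving it with `dig` over both UDP and TCP. The naming step can be sketched as:

```python
def pod_a_record(pod_ip, namespace, zone="cluster.local"):
    """Build the dashed pod A record the probe script constructs with
    awk, e.g. 10.244.1.5 in namespace dns-526 becomes
    10-244-1-5.dns-526.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{zone}"
```

Each successful lookup writes an `OK` marker file (e.g. `/results/wheezy_udp@PodARecord`), and the prober passes once every expected marker is present.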
• [SLOW TEST:6.276 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":117,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:13:28.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:13:29.123: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-411d38ee-c438-442b-a5cb-990226e33fff" in namespace "security-context-test-6159" to be "Succeeded or Failed" Apr 18 00:13:29.205: INFO: Pod "busybox-readonly-false-411d38ee-c438-442b-a5cb-990226e33fff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 81.935194ms Apr 18 00:13:31.241: INFO: Pod "busybox-readonly-false-411d38ee-c438-442b-a5cb-990226e33fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117597633s Apr 18 00:13:33.246: INFO: Pod "busybox-readonly-false-411d38ee-c438-442b-a5cb-990226e33fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122226232s Apr 18 00:13:33.246: INFO: Pod "busybox-readonly-false-411d38ee-c438-442b-a5cb-990226e33fff" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:13:33.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6159" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1884,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:13:33.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-5dc80d99-5575-4384-b42d-1d61b78c97fc in namespace container-probe-2703 Apr 18 00:13:37.325: INFO: Started pod busybox-5dc80d99-5575-4384-b42d-1d61b78c97fc in namespace container-probe-2703 STEP: checking the pod's current state and verifying that restartCount is present Apr 18 00:13:37.329: INFO: Initial restart count of pod busybox-5dc80d99-5575-4384-b42d-1d61b78c97fc is 0 Apr 18 00:14:25.527: INFO: Restart count of pod container-probe-2703/busybox-5dc80d99-5575-4384-b42d-1d61b78c97fc is now 1 (48.198230689s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:14:25.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2703" for this suite. • [SLOW TEST:52.350 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 18 00:14:25.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:14:26.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:14:28.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765666, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765666, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765666, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765666, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:14:31.482: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the 
mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:14:31.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7176" for this suite. STEP: Destroying namespace "webhook-7176-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.003 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":120,"skipped":1941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:14:31.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 18 00:14:31.692: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:31.714: INFO: Number of nodes with available pods: 0 Apr 18 00:14:31.714: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:14:32.739: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:32.742: INFO: Number of nodes with available pods: 0 Apr 18 00:14:32.742: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:14:33.718: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:33.721: INFO: Number of nodes with available pods: 0 Apr 18 00:14:33.721: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:14:34.718: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:34.721: INFO: Number of nodes with available pods: 0 Apr 18 00:14:34.721: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:14:35.718: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:35.721: INFO: Number of nodes with available pods: 2 Apr 18 00:14:35.721: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is 
revived. Apr 18 00:14:35.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:35.738: INFO: Number of nodes with available pods: 1 Apr 18 00:14:35.738: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:36.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:36.760: INFO: Number of nodes with available pods: 1 Apr 18 00:14:36.760: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:37.778: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:37.782: INFO: Number of nodes with available pods: 1 Apr 18 00:14:37.782: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:38.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:38.746: INFO: Number of nodes with available pods: 1 Apr 18 00:14:38.746: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:39.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:39.747: INFO: Number of nodes with available pods: 1 Apr 18 00:14:39.747: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:40.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:40.793: INFO: Number of nodes with available pods: 1 Apr 18 
00:14:40.793: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:41.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:41.748: INFO: Number of nodes with available pods: 1 Apr 18 00:14:41.748: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:14:42.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:14:42.748: INFO: Number of nodes with available pods: 2 Apr 18 00:14:42.748: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2208, will wait for the garbage collector to delete the pods Apr 18 00:14:42.811: INFO: Deleting DaemonSet.extensions daemon-set took: 6.590844ms Apr 18 00:14:42.911: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.259436ms Apr 18 00:14:53.014: INFO: Number of nodes with available pods: 0 Apr 18 00:14:53.014: INFO: Number of running nodes: 0, number of available pods: 0 Apr 18 00:14:53.021: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2208/daemonsets","resourceVersion":"8931880"},"items":null} Apr 18 00:14:53.024: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2208/pods","resourceVersion":"8931880"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:14:53.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "daemonsets-2208" for this suite. • [SLOW TEST:21.431 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":121,"skipped":1967,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:14:53.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:14:53.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878" in namespace "projected-5765" to be "Succeeded or Failed" Apr 18 00:14:53.111: INFO: Pod "downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.260286ms Apr 18 00:14:55.114: INFO: Pod "downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00713776s Apr 18 00:14:57.119: INFO: Pod "downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0114383s STEP: Saw pod success Apr 18 00:14:57.119: INFO: Pod "downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878" satisfied condition "Succeeded or Failed" Apr 18 00:14:57.122: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878 container client-container: STEP: delete the pod Apr 18 00:14:57.148: INFO: Waiting for pod downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878 to disappear Apr 18 00:14:57.194: INFO: Pod downwardapi-volume-7379736c-e430-4844-b0af-af0fc90d8878 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:14:57.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5765" for this suite. 
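The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` entries above, with their growing `Elapsed:` values, come from the framework polling the pod phase roughly every two seconds until it reaches a terminal phase or the timeout expires. A minimal sketch of that polling pattern (the function name and the simulated phase source are illustrative, not the framework's actual helpers):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    Mirrors the log above: each poll observes the current phase and the
    elapsed time, and the wait succeeds on Succeeded or Failed.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated pod that is Pending for two polls and then succeeds,
# matching the Pending/Pending/Succeeded sequence in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_condition(lambda: next(phases), interval=0.01)
# phase == "Succeeded"
```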
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":1969,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:14:57.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:14:57.793: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:14:59.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765697, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765697, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765697, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:15:02.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:15:02.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8001-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:03.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5632" for this suite. STEP: Destroying namespace "webhook-5632-markers" for this suite. 
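The mutating webhook registered above patches incoming custom resources before they are persisted. The admission API requires the webhook to reply with an `AdmissionReview` whose response carries a base64-encoded JSON Patch and `patchType: "JSONPatch"`. A minimal sketch of building that response body (the `uid` and the patch operation here are illustrative, not the ones this test sends):

```python
import base64
import json

def mutate_response(uid, patch_ops):
    """Build the AdmissionReview response a mutating webhook returns.

    patch_ops is a JSON Patch document; the admission API expects it
    base64-encoded in `patch` with patchType set to "JSONPatch".
    """
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

# Example mutation: add a field to the incoming custom resource.
resp = mutate_response(
    "illustrative-uid",
    [{"op": "add", "path": "/data/mutated", "value": "true"}],
)
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
```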
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.856 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":123,"skipped":1984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:04.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-03d694f7-e255-480a-9225-53074e85e26f STEP: Creating a pod to test consume secrets Apr 18 00:15:04.149: INFO: Waiting up to 5m0s for pod "pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a" in namespace "secrets-6522" to be "Succeeded or Failed" Apr 18 00:15:04.154: INFO: Pod "pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.322839ms Apr 18 00:15:06.158: INFO: Pod "pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008845881s Apr 18 00:15:08.162: INFO: Pod "pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012964309s STEP: Saw pod success Apr 18 00:15:08.162: INFO: Pod "pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a" satisfied condition "Succeeded or Failed" Apr 18 00:15:08.165: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a container secret-volume-test: STEP: delete the pod Apr 18 00:15:08.201: INFO: Waiting for pod pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a to disappear Apr 18 00:15:08.214: INFO: Pod pod-secrets-417e1d52-0f5e-4745-8a3c-c3425452e19a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:08.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6522" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:08.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:15.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8826" for this suite. • [SLOW TEST:7.058 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":125,"skipped":2052,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:15.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 18 00:15:19.854: INFO: Successfully updated pod "pod-update-a5e0cb36-433d-4165-bd00-2222f89784d6" STEP: verifying the updated pod is in kubernetes Apr 18 00:15:19.860: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:19.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6372" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2057,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:19.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2236" for this suite. STEP: Destroying namespace "nsdeletetest-88" for this suite. Apr 18 00:15:26.072: INFO: Namespace nsdeletetest-88 was already deleted STEP: Destroying namespace "nsdeletetest-7616" for this suite. 
• [SLOW TEST:6.192 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":127,"skipped":2073,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:26.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 18 00:15:26.172: INFO: Waiting up to 5m0s for pod "pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32" in namespace "emptydir-8876" to be "Succeeded or Failed" Apr 18 00:15:26.190: INFO: Pod "pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32": Phase="Pending", Reason="", readiness=false. Elapsed: 18.396794ms Apr 18 00:15:28.249: INFO: Pod "pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076927925s Apr 18 00:15:30.256: INFO: Pod "pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.084418625s STEP: Saw pod success Apr 18 00:15:30.256: INFO: Pod "pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32" satisfied condition "Succeeded or Failed" Apr 18 00:15:30.259: INFO: Trying to get logs from node latest-worker2 pod pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32 container test-container: STEP: delete the pod Apr 18 00:15:30.290: INFO: Waiting for pod pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32 to disappear Apr 18 00:15:30.298: INFO: Pod pod-1b1390eb-7a8c-47fb-97a2-7a530e43ca32 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:30.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8876" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:30.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 18 00:15:30.901: INFO: created pod pod-service-account-defaultsa Apr 18 00:15:30.901: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 18 00:15:30.934: INFO: 
created pod pod-service-account-mountsa Apr 18 00:15:30.934: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 18 00:15:30.961: INFO: created pod pod-service-account-nomountsa Apr 18 00:15:30.961: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 18 00:15:30.977: INFO: created pod pod-service-account-defaultsa-mountspec Apr 18 00:15:30.977: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 18 00:15:30.997: INFO: created pod pod-service-account-mountsa-mountspec Apr 18 00:15:30.997: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 18 00:15:31.052: INFO: created pod pod-service-account-nomountsa-mountspec Apr 18 00:15:31.052: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 18 00:15:31.086: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 18 00:15:31.086: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 18 00:15:31.111: INFO: created pod pod-service-account-mountsa-nomountspec Apr 18 00:15:31.111: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 18 00:15:31.143: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 18 00:15:31.143: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:31.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1769" for this suite. 
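The nine `service account token volume mount:` lines above exercise the full precedence matrix for token automounting: `automountServiceAccountToken` on the pod spec wins when set, otherwise the ServiceAccount's setting applies, and the default is to mount. A sketch of that decision rule, checked against the exact (pod spec, service account) combinations and outcomes logged above:

```python
def should_mount_token(pod_setting, sa_setting):
    """Decide whether a pod gets the service-account token volume:
    the pod spec's automountServiceAccountToken wins if set, then
    the ServiceAccount's, defaulting to mount."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# (pod spec setting, service account setting, expected mount) for the
# nine pods created above, in log order.
cases = [
    (None,  None,  True),   # pod-service-account-defaultsa
    (None,  True,  True),   # pod-service-account-mountsa
    (None,  False, False),  # pod-service-account-nomountsa
    (True,  None,  True),   # pod-service-account-defaultsa-mountspec
    (True,  True,  True),   # pod-service-account-mountsa-mountspec
    (True,  False, True),   # pod-service-account-nomountsa-mountspec
    (False, None,  False),  # pod-service-account-defaultsa-nomountspec
    (False, True,  False),  # pod-service-account-mountsa-nomountspec
    (False, False, False),  # pod-service-account-nomountsa-nomountspec
]
results = [should_mount_token(p, s) == want for p, s, want in cases]
```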
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":129,"skipped":2124,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:31.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:15:31.335: INFO: Creating ReplicaSet my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55 Apr 18 00:15:31.353: INFO: Pod name my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55: Found 0 pods out of 1 Apr 18 00:15:36.567: INFO: Pod name my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55: Found 1 pods out of 1 Apr 18 00:15:36.567: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55" is running Apr 18 00:15:42.612: INFO: Pod "my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55-f9wl5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:15:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:15:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-04-18 00:15:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:15:31 +0000 UTC Reason: Message:}]) Apr 18 00:15:42.612: INFO: Trying to dial the pod Apr 18 00:15:47.624: INFO: Controller my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55: Got expected result from replica 1 [my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55-f9wl5]: "my-hostname-basic-07306ef9-8539-414f-abb6-c6f94d8ffe55-f9wl5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:15:47.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3343" for this suite. • [SLOW TEST:16.423 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":130,"skipped":2128,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:15:47.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:00.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4903" for this suite. • [SLOW TEST:13.221 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":131,"skipped":2133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:00.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 18 00:16:00.931: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 18 00:16:00.941: INFO: Waiting for terminating namespaces to be deleted... 
Apr 18 00:16:00.943: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 18 00:16:00.959: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:00.959: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:16:00.959: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:00.959: INFO: Container kube-proxy ready: true, restart count 0 Apr 18 00:16:00.959: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 18 00:16:00.965: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:00.965: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:16:00.965: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:00.965: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1606c1c2c5b91993], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-910" for this suite. 
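The `FailedScheduling` event above (`0/3 nodes are available: 3 node(s) didn't match node selector`) follows from the scheduler's nodeSelector predicate: a pod fits a node only if every key/value pair in its `nodeSelector` is present in that node's labels. A minimal subset-match sketch (the selector and label values are illustrative; the test's actual nonempty selector is not shown in the log):

```python
def matches_node_selector(node_labels, node_selector):
    """A pod fits a node only if every nodeSelector key/value pair
    appears verbatim in the node's labels (subset match)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# No cluster node carries this illustrative label, so the pod is
# unschedulable on all of them, as the event above reports.
node_labels = {"kubernetes.io/hostname": "latest-worker"}
selector = {"example-label": "nonempty-value"}
fits = matches_node_selector(node_labels, selector)
# fits is False; an empty selector would match every node.
```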
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":132,"skipped":2204,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:01.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 18 00:16:02.071: INFO: Waiting up to 5m0s for pod "pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8" in namespace "emptydir-4216" to be "Succeeded or Failed" Apr 18 00:16:02.094: INFO: Pod "pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.379774ms Apr 18 00:16:04.099: INFO: Pod "pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027634713s Apr 18 00:16:06.102: INFO: Pod "pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03109646s STEP: Saw pod success Apr 18 00:16:06.102: INFO: Pod "pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8" satisfied condition "Succeeded or Failed" Apr 18 00:16:06.105: INFO: Trying to get logs from node latest-worker pod pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8 container test-container: STEP: delete the pod Apr 18 00:16:06.118: INFO: Waiting for pod pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8 to disappear Apr 18 00:16:06.121: INFO: Pod pod-1322f8e6-adf3-4946-bdf7-591d54d2bbf8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:06.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4216" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2206,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:06.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:16:06.270: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b" in namespace "projected-4079" to be "Succeeded or Failed" Apr 18 00:16:06.287: INFO: Pod "downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.79384ms Apr 18 00:16:08.291: INFO: Pod "downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020536206s Apr 18 00:16:10.295: INFO: Pod "downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024886896s STEP: Saw pod success Apr 18 00:16:10.295: INFO: Pod "downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b" satisfied condition "Succeeded or Failed" Apr 18 00:16:10.298: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b container client-container: STEP: delete the pod Apr 18 00:16:10.329: INFO: Waiting for pod downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b to disappear Apr 18 00:16:10.338: INFO: Pod downwardapi-volume-2e0c75e7-f7f3-4229-8141-8296d6a91a3b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:10.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4079" for this suite. 
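For reference, the pod this downward-API test creates can be sketched roughly as below. The names and image are hypothetical, but the shape (a projected volume exposing `metadata.name` as a file named `podname`) matches what the test exercises:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # hypothetical image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
```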
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2213,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:10.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:16:10.447: INFO: Create a RollingUpdate DaemonSet Apr 18 00:16:10.451: INFO: Check that daemon pods launch on every node of the cluster Apr 18 00:16:10.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:10.468: INFO: Number of nodes with available pods: 0 Apr 18 00:16:10.468: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:11.472: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:11.474: INFO: Number of nodes with available pods: 0 Apr 18 00:16:11.474: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:12.607: INFO: DaemonSet pods can't tolerate node latest-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:12.648: INFO: Number of nodes with available pods: 0 Apr 18 00:16:12.648: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:13.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:13.480: INFO: Number of nodes with available pods: 0 Apr 18 00:16:13.480: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:14.471: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:14.477: INFO: Number of nodes with available pods: 1 Apr 18 00:16:14.477: INFO: Node latest-worker2 is running more than one daemon pod Apr 18 00:16:15.475: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:15.480: INFO: Number of nodes with available pods: 2 Apr 18 00:16:15.480: INFO: Number of running nodes: 2, number of available pods: 2 Apr 18 00:16:15.480: INFO: Update the DaemonSet to trigger a rollout Apr 18 00:16:15.486: INFO: Updating DaemonSet daemon-set Apr 18 00:16:23.501: INFO: Roll back the DaemonSet before rollout is complete Apr 18 00:16:23.507: INFO: Updating DaemonSet daemon-set Apr 18 00:16:23.507: INFO: Make sure DaemonSet rollback is complete Apr 18 00:16:23.527: INFO: Wrong image for pod: daemon-set-ktdng. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 18 00:16:23.527: INFO: Pod daemon-set-ktdng is not available Apr 18 00:16:23.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:24.540: INFO: Wrong image for pod: daemon-set-ktdng. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 18 00:16:24.540: INFO: Pod daemon-set-ktdng is not available Apr 18 00:16:24.554: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:25.681: INFO: Wrong image for pod: daemon-set-ktdng. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 18 00:16:25.681: INFO: Pod daemon-set-ktdng is not available Apr 18 00:16:25.685: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:16:26.541: INFO: Pod daemon-set-55ch2 is not available Apr 18 00:16:26.545: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3956, will wait for the garbage collector to delete the pods Apr 18 00:16:26.612: INFO: Deleting DaemonSet.extensions daemon-set took: 6.394463ms Apr 18 00:16:26.912: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.221363ms Apr 18 00:16:32.815: INFO: Number of nodes with available pods: 0 Apr 18 00:16:32.815: INFO: Number of running nodes: 0, number of available pods: 0 Apr 18 00:16:32.818: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3956/daemonsets","resourceVersion":"8932708"},"items":null} Apr 18 00:16:32.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3956/pods","resourceVersion":"8932708"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:32.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3956" for this suite. • [SLOW TEST:22.491 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":135,"skipped":2226,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:32.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:36.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7917" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:36.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 18 00:16:37.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 18 00:16:37.234: INFO: stderr: "" Apr 18 00:16:37.234: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:37.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8537" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":137,"skipped":2269,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:37.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 18 00:16:37.320: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 18 00:16:37.369: INFO: Waiting for terminating namespaces to be deleted... 
Apr 18 00:16:37.372: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 18 00:16:37.376: INFO: busybox-host-aliasesdcdf81fd-a16e-485e-95d7-eb3e0dfd7ab7 from kubelet-test-7917 started at 2020-04-18 00:16:32 +0000 UTC (1 container statuses recorded) Apr 18 00:16:37.376: INFO: Container busybox-host-aliasesdcdf81fd-a16e-485e-95d7-eb3e0dfd7ab7 ready: true, restart count 0 Apr 18 00:16:37.376: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:37.376: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:16:37.376: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:37.376: INFO: Container kube-proxy ready: true, restart count 0 Apr 18 00:16:37.376: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 18 00:16:37.380: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:37.380: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:16:37.380: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:16:37.380: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 18 00:16:37.446: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 18 00:16:37.446: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 18 00:16:37.446: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 18 00:16:37.447: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on 
Node latest-worker Apr 18 00:16:37.447: INFO: Pod busybox-host-aliasesdcdf81fd-a16e-485e-95d7-eb3e0dfd7ab7 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 18 00:16:37.447: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Apr 18 00:16:37.452: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b.1606c1cb43ff4615], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2699/filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b.1606c1cb909379dd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b.1606c1cbdf874bfd], Reason = [Created], Message = [Created container filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b] STEP: Considering event: Type = [Normal], Name = [filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b.1606c1cbf283cc66], Reason = [Started], Message = [Started container filler-pod-406c4911-d9dd-48e7-b0b9-d68bf83e177b] STEP: Considering event: Type = [Normal], Name = [filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1.1606c1cb45e0faf3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2699/filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1.1606c1cbd0cc3c53], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1.1606c1cbfc2f3f16], Reason = [Created], Message = [Created container 
filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1] STEP: Considering event: Type = [Normal], Name = [filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1.1606c1cc0b35067d], Reason = [Started], Message = [Started container filler-pod-9e9c6ff7-c9e1-4b7b-a3ab-885e8ac799f1] STEP: Considering event: Type = [Warning], Name = [additional-pod.1606c1ccacd4098e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:16:44.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2699" for this suite. 
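The "filler" pods seen in the events above are plain pause pods with a CPU request sized to consume most of each node's allocatable CPU, so that one more pod cannot be scheduled. A sketch (the request value is copied from the log; the real test computes it per node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-demo            # hypothetical name
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2    # image taken from the events above
    resources:
      requests:
        cpu: "11130m"              # sized to fill the node's remaining CPU
      limits:
        cpu: "11130m"
```

The subsequent "additional" pod then fails with `FailedScheduling: ... Insufficient cpu`, which is exactly the event the test waits for.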
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.365 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":138,"skipped":2276,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:16:44.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:16:44.695: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 18 00:16:44.708: INFO: Number of nodes with available pods: 0 Apr 18 00:16:44.708: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 18 00:16:44.754: INFO: Number of nodes with available pods: 0 Apr 18 00:16:44.754: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:45.758: INFO: Number of nodes with available pods: 0 Apr 18 00:16:45.758: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:46.758: INFO: Number of nodes with available pods: 0 Apr 18 00:16:46.759: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:47.758: INFO: Number of nodes with available pods: 1 Apr 18 00:16:47.758: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 18 00:16:47.795: INFO: Number of nodes with available pods: 1 Apr 18 00:16:47.795: INFO: Number of running nodes: 0, number of available pods: 1 Apr 18 00:16:48.800: INFO: Number of nodes with available pods: 0 Apr 18 00:16:48.800: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 18 00:16:48.810: INFO: Number of nodes with available pods: 0 Apr 18 00:16:48.810: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:49.836: INFO: Number of nodes with available pods: 0 Apr 18 00:16:49.836: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:50.814: INFO: Number of nodes with available pods: 0 Apr 18 00:16:50.815: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:51.814: INFO: Number of nodes with available pods: 0 Apr 18 00:16:51.814: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:52.831: INFO: Number of nodes with available pods: 0 Apr 18 00:16:52.831: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:53.815: INFO: Number of nodes with available pods: 0 Apr 18 00:16:53.815: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:54.813: INFO: Number of nodes with 
available pods: 0 Apr 18 00:16:54.813: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:16:55.814: INFO: Number of nodes with available pods: 1 Apr 18 00:16:55.814: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2018, will wait for the garbage collector to delete the pods Apr 18 00:16:55.878: INFO: Deleting DaemonSet.extensions daemon-set took: 6.049257ms Apr 18 00:16:56.178: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.280583ms Apr 18 00:17:02.781: INFO: Number of nodes with available pods: 0 Apr 18 00:17:02.781: INFO: Number of running nodes: 0, number of available pods: 0 Apr 18 00:17:02.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2018/daemonsets","resourceVersion":"8932950"},"items":null} Apr 18 00:17:02.785: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2018/pods","resourceVersion":"8932950"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:02.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2018" for this suite. 
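The "complex daemon" in this test is a DaemonSet constrained by a node selector, which the test drives by relabeling nodes (the log's blue-to-green transition) and by switching the update strategy to RollingUpdate. A hedged sketch of that shape (label key and selector are hypothetical; the name and image appear in the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                  # name taken from the log
spec:
  selector:
    matchLabels:
      app: daemon-set-demo          # hypothetical label
  updateStrategy:
    type: RollingUpdate             # the strategy the test switches to
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      nodeSelector:
        color: green                # hypothetical key; the test relabels nodes blue -> green
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # image seen in the rollback test above
```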
• [SLOW TEST:18.200 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":139,"skipped":2298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:02.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:17:03.612: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:17:05.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765823, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765823, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765823, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765823, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:17:08.731: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 18 00:17:08.754: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4548" for this suite. STEP: Destroying namespace "webhook-4548-markers" for this suite. 
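The registration step logged above ("Registering the crd webhook via the AdmissionRegistration API") amounts to installing a ValidatingWebhookConfiguration that intercepts CRD creation and rejects it. A sketch, with the webhook name and path hypothetical (the service name and namespace appear in the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation          # hypothetical name
webhooks:
- name: deny-crd.example.com       # hypothetical
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-4548      # namespace taken from the log
      name: e2e-test-webhook       # service name taken from the log
      path: /crd                   # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail              # ensures the CRD create is denied, not silently allowed
```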
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.045 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":140,"skipped":2352,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:08.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 18 00:17:08.968: INFO: Waiting up to 5m0s for pod "pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619" in namespace "emptydir-6048" to be "Succeeded or Failed" Apr 18 00:17:08.983: INFO: Pod "pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619": Phase="Pending", Reason="", readiness=false. Elapsed: 14.927627ms Apr 18 00:17:10.992: INFO: Pod "pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024466828s Apr 18 00:17:12.997: INFO: Pod "pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028978025s STEP: Saw pod success Apr 18 00:17:12.997: INFO: Pod "pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619" satisfied condition "Succeeded or Failed" Apr 18 00:17:13.001: INFO: Trying to get logs from node latest-worker pod pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619 container test-container: STEP: delete the pod Apr 18 00:17:13.023: INFO: Waiting for pod pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619 to disappear Apr 18 00:17:13.068: INFO: Pod pod-43f61a4b-33e0-4519-bf7d-0fb596a0c619 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:13.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6048" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2369,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:13.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with 
name configmap-test-volume-map-5c034999-66ee-420a-8d00-a92309d68dd4 STEP: Creating a pod to test consume configMaps Apr 18 00:17:13.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529" in namespace "configmap-6162" to be "Succeeded or Failed" Apr 18 00:17:13.190: INFO: Pod "pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529": Phase="Pending", Reason="", readiness=false. Elapsed: 41.577488ms Apr 18 00:17:15.194: INFO: Pod "pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045741848s Apr 18 00:17:17.199: INFO: Pod "pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050225864s STEP: Saw pod success Apr 18 00:17:17.199: INFO: Pod "pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529" satisfied condition "Succeeded or Failed" Apr 18 00:17:17.205: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529 container configmap-volume-test: STEP: delete the pod Apr 18 00:17:17.257: INFO: Waiting for pod pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529 to disappear Apr 18 00:17:17.261: INFO: Pod pod-configmaps-d9d5b715-54cf-49a6-a037-3dc2634f8529 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:17.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6162" for this suite. 
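The "mappings and Item mode set" case above maps a ConfigMap key to a custom path with a per-item file mode. A minimal sketch of that pod, with the ConfigMap name shortened and the key hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                 # hypothetical image
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to && cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # shortened; the logged name carries a UID suffix
      items:
      - key: data-1                     # hypothetical key
        path: path/to/data              # the "mapping": key exposed at a custom path
        mode: 0400                      # per-item mode, the point of this test case
  restartPolicy: Never
```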
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2369,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:17.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 18 00:17:21.357: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:21.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8470" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2378,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:21.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 18 00:17:21.479: INFO: Waiting up to 5m0s for pod "var-expansion-91fb743c-2333-4464-9549-148835629826" in namespace "var-expansion-7395" to be "Succeeded or Failed" Apr 18 00:17:21.495: INFO: Pod "var-expansion-91fb743c-2333-4464-9549-148835629826": Phase="Pending", Reason="", readiness=false. Elapsed: 16.124529ms Apr 18 00:17:23.498: INFO: Pod "var-expansion-91fb743c-2333-4464-9549-148835629826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019643138s Apr 18 00:17:25.502: INFO: Pod "var-expansion-91fb743c-2333-4464-9549-148835629826": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023704946s STEP: Saw pod success Apr 18 00:17:25.502: INFO: Pod "var-expansion-91fb743c-2333-4464-9549-148835629826" satisfied condition "Succeeded or Failed" Apr 18 00:17:25.505: INFO: Trying to get logs from node latest-worker pod var-expansion-91fb743c-2333-4464-9549-148835629826 container dapi-container: STEP: delete the pod Apr 18 00:17:25.551: INFO: Waiting for pod var-expansion-91fb743c-2333-4464-9549-148835629826 to disappear Apr 18 00:17:25.591: INFO: Pod var-expansion-91fb743c-2333-4464-9549-148835629826 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:25.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7395" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2379,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0418 00:17:35.695153 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 18 00:17:35.695: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:35.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1380" for this suite. 
• [SLOW TEST:10.105 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":145,"skipped":2387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:35.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:17:36.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:17:38.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765856, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765856, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765856, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765856, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:17:41.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 18 00:17:45.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-2665 to-be-attached-pod -i -c=container1' Apr 18 00:17:45.358: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:45.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2665" for this suite. STEP: Destroying namespace "webhook-2665-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.769 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":146,"skipped":2422,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:45.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:45.772: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2769" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2433,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:45.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:17:46.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:17:48.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765866, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765866, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765866, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765866, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:17:51.467: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:17:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9668" for this suite. STEP: Destroying namespace "webhook-9668-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.941 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":148,"skipped":2435,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:17:51.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 18 00:17:51.854: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 18 00:18:03.454: INFO: >>> kubeConfig: /root/.kube/config Apr 18 00:18:05.372: INFO: >>> kubeConfig: /root/.kube/config 
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:15.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6171" for this suite. • [SLOW TEST:24.154 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":149,"skipped":2435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:15.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Apr 18 00:18:15.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-610' Apr 18 00:18:16.210: INFO: stderr: "" Apr 18 00:18:16.210: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 18 00:18:17.214: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:18:17.214: INFO: Found 0 / 1 Apr 18 00:18:18.214: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:18:18.214: INFO: Found 0 / 1 Apr 18 00:18:19.215: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:18:19.215: INFO: Found 1 / 1 Apr 18 00:18:19.215: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 18 00:18:19.218: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:18:19.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 18 00:18:19.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-xkkcb --namespace=kubectl-610 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 18 00:18:19.315: INFO: stderr: "" Apr 18 00:18:19.315: INFO: stdout: "pod/agnhost-master-xkkcb patched\n" STEP: checking annotations Apr 18 00:18:19.326: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:18:19.326: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:19.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-610" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":150,"skipped":2466,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:19.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 18 00:18:19.420: INFO: Waiting up to 5m0s for pod "downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2" in namespace "downward-api-6636" to be "Succeeded or Failed" Apr 18 00:18:19.423: INFO: Pod "downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18985ms Apr 18 00:18:21.526: INFO: Pod "downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105962685s Apr 18 00:18:23.538: INFO: Pod "downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.11756668s STEP: Saw pod success Apr 18 00:18:23.538: INFO: Pod "downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2" satisfied condition "Succeeded or Failed" Apr 18 00:18:23.541: INFO: Trying to get logs from node latest-worker pod downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2 container dapi-container: STEP: delete the pod Apr 18 00:18:23.568: INFO: Waiting for pod downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2 to disappear Apr 18 00:18:23.586: INFO: Pod downward-api-c3d17ea1-1240-49b3-8ec7-6b63e37910b2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:23.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6636" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2470,"failed":0} ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:23.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-7354 STEP: waiting up to 3m0s for service multi-endpoint-test 
in namespace services-7354 to expose endpoints map[] Apr 18 00:18:23.688: INFO: Get endpoints failed (13.680634ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 18 00:18:24.712: INFO: successfully validated that service multi-endpoint-test in namespace services-7354 exposes endpoints map[] (1.037917056s elapsed) STEP: Creating pod pod1 in namespace services-7354 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7354 to expose endpoints map[pod1:[100]] Apr 18 00:18:27.785: INFO: successfully validated that service multi-endpoint-test in namespace services-7354 exposes endpoints map[pod1:[100]] (3.06668169s elapsed) STEP: Creating pod pod2 in namespace services-7354 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7354 to expose endpoints map[pod1:[100] pod2:[101]] Apr 18 00:18:31.874: INFO: successfully validated that service multi-endpoint-test in namespace services-7354 exposes endpoints map[pod1:[100] pod2:[101]] (4.085552469s elapsed) STEP: Deleting pod pod1 in namespace services-7354 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7354 to expose endpoints map[pod2:[101]] Apr 18 00:18:32.958: INFO: successfully validated that service multi-endpoint-test in namespace services-7354 exposes endpoints map[pod2:[101]] (1.079360149s elapsed) STEP: Deleting pod pod2 in namespace services-7354 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7354 to expose endpoints map[] Apr 18 00:18:34.065: INFO: successfully validated that service multi-endpoint-test in namespace services-7354 exposes endpoints map[] (1.103202797s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:34.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7354" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:10.519 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":152,"skipped":2470,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:34.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-3447ced2-6306-4e11-8b1b-ac646134294f STEP: Creating a pod to test consume configMaps Apr 18 00:18:34.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6" in namespace "configmap-4950" to be "Succeeded or Failed" Apr 18 00:18:34.191: INFO: Pod "pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663995ms Apr 18 00:18:36.195: INFO: Pod "pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00869256s Apr 18 00:18:38.200: INFO: Pod "pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013085925s STEP: Saw pod success Apr 18 00:18:38.200: INFO: Pod "pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6" satisfied condition "Succeeded or Failed" Apr 18 00:18:38.203: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6 container configmap-volume-test: STEP: delete the pod Apr 18 00:18:38.252: INFO: Waiting for pod pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6 to disappear Apr 18 00:18:38.263: INFO: Pod pod-configmaps-a9dfb881-bd54-4036-abc7-0ad4721e0cb6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:38.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4950" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2470,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:38.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 18 00:18:38.336: INFO: Waiting up to 5m0s for pod "client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4" in namespace "containers-3683" to be "Succeeded or Failed" Apr 18 00:18:38.340: INFO: Pod "client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.742147ms Apr 18 00:18:40.344: INFO: Pod "client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007524355s Apr 18 00:18:42.347: INFO: Pod "client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010766667s STEP: Saw pod success Apr 18 00:18:42.347: INFO: Pod "client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4" satisfied condition "Succeeded or Failed" Apr 18 00:18:42.350: INFO: Trying to get logs from node latest-worker pod client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4 container test-container: STEP: delete the pod Apr 18 00:18:42.378: INFO: Waiting for pod client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4 to disappear Apr 18 00:18:42.382: INFO: Pod client-containers-a564e9ea-a518-4c10-9fc6-263eb41096c4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:42.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3683" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:42.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 18 00:18:42.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4734' Apr 18 00:18:42.534: INFO: stderr: "" Apr 18 00:18:42.534: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 18 00:18:42.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods 
e2e-test-httpd-pod --namespace=kubectl-4734' Apr 18 00:18:46.036: INFO: stderr: "" Apr 18 00:18:46.036: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:46.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4734" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":155,"skipped":2505,"failed":0} S ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:46.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 18 00:18:46.134: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7697" to be "Succeeded or Failed" Apr 18 00:18:46.144: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.357177ms Apr 18 00:18:48.149: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01515204s Apr 18 00:18:50.153: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019083264s STEP: Saw pod success Apr 18 00:18:50.153: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 18 00:18:50.156: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 18 00:18:50.175: INFO: Waiting for pod pod-host-path-test to disappear Apr 18 00:18:50.203: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:18:50.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7697" for this suite. •{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2506,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:18:50.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:18:50.455: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 18 00:18:55.466: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring 
each pod is running Apr 18 00:18:55.467: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 18 00:18:57.470: INFO: Creating deployment "test-rollover-deployment" Apr 18 00:18:57.499: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 18 00:18:59.520: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 18 00:18:59.526: INFO: Ensure that both replica sets have 1 created replica Apr 18 00:18:59.534: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 18 00:18:59.539: INFO: Updating deployment test-rollover-deployment Apr 18 00:18:59.539: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 18 00:19:01.881: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 18 00:19:01.887: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 18 00:19:01.894: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:01.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765939, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Apr 18 00:19:03.902: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:03.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765939, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:05.902: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:05.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765943, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:07.900: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:07.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765943, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:09.901: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:09.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765943, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:11.902: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:11.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765943, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:13.902: INFO: all replica sets need to contain the pod-template-hash label Apr 18 00:19:13.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765943, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722765937, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:19:15.902: INFO: Apr 18 00:19:15.903: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 18 00:19:15.911: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6657 /apis/apps/v1/namespaces/deployment-6657/deployments/test-rollover-deployment 199288fd-33aa-4e9e-8571-7320140ea0b2 8934054 2 2020-04-18 00:18:57 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054314b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-18 00:18:57 +0000 UTC,LastTransitionTime:2020-04-18 00:18:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-18 00:19:14 +0000 UTC,LastTransitionTime:2020-04-18 00:18:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 18 00:19:15.914: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-6657 /apis/apps/v1/namespaces/deployment-6657/replicasets/test-rollover-deployment-78df7bc796 10783f97-8037-49c3-a548-e23ea532e821 8934043 2 2020-04-18 00:18:59 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 199288fd-33aa-4e9e-8571-7320140ea0b2 0xc0051e28d7 0xc0051e28d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0051e2978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:19:15.914: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 18 00:19:15.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6657 /apis/apps/v1/namespaces/deployment-6657/replicasets/test-rollover-controller af896365-51b9-46c6-bef7-855a7c526312 8934052 2 2020-04-18 00:18:50 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 199288fd-33aa-4e9e-8571-7320140ea0b2 0xc0051e279f 0xc0051e27c0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0051e2848 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:19:15.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6657 /apis/apps/v1/namespaces/deployment-6657/replicasets/test-rollover-deployment-f6c94f66c d4c68098-8b28-4e9f-b2b2-64bcf0ca5812 8933990 2 2020-04-18 00:18:57 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 199288fd-33aa-4e9e-8571-7320140ea0b2 0xc0051e2a10 0xc0051e2a11}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0051e2aa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:19:15.918: INFO: Pod "test-rollover-deployment-78df7bc796-c7fq7" is available: 
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-c7fq7 test-rollover-deployment-78df7bc796- deployment-6657 /api/v1/namespaces/deployment-6657/pods/test-rollover-deployment-78df7bc796-c7fq7 88d0333d-0788-4023-b91e-10a48c9d6b9f 8934010 0 2020-04-18 00:18:59 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 10783f97-8037-49c3-a548-e23ea532e821 0xc0051e3197 0xc0051e3198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qqqds,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qqqds,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qqqds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always
,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:18:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:18:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.201,StartTime:2020-04-18 00:18:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://98214ba3b2298a22612759ba980d25ee1aafb13fedba10d952e92acb13e56646,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:15.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6657" for this suite. 
• [SLOW TEST:25.716 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":157,"skipped":2509,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:15.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:19:16.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0" in namespace "downward-api-4085" to be "Succeeded or Failed" Apr 18 00:19:16.007: INFO: Pod "downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.454797ms Apr 18 00:19:18.010: INFO: Pod "downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008946736s Apr 18 00:19:20.014: INFO: Pod "downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012818216s STEP: Saw pod success Apr 18 00:19:20.014: INFO: Pod "downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0" satisfied condition "Succeeded or Failed" Apr 18 00:19:20.017: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0 container client-container: STEP: delete the pod Apr 18 00:19:20.050: INFO: Waiting for pod downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0 to disappear Apr 18 00:19:20.061: INFO: Pod downwardapi-volume-019a547f-b0c1-4dd8-bebc-22a5750d9bc0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:20.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4085" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:20.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 18 00:19:20.117: INFO: Waiting up to 5m0s for pod "downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6" in namespace "downward-api-2981" to be "Succeeded or Failed" Apr 18 00:19:20.138: INFO: Pod "downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.605205ms Apr 18 00:19:22.142: INFO: Pod "downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024965936s Apr 18 00:19:24.146: INFO: Pod "downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028984488s STEP: Saw pod success Apr 18 00:19:24.146: INFO: Pod "downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6" satisfied condition "Succeeded or Failed" Apr 18 00:19:24.149: INFO: Trying to get logs from node latest-worker pod downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6 container dapi-container: STEP: delete the pod Apr 18 00:19:24.170: INFO: Waiting for pod downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6 to disappear Apr 18 00:19:24.174: INFO: Pod downward-api-ab0e7852-da80-4993-934a-c1daa84f01d6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:24.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2981" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2548,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:24.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:19:24.231: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 18 
00:19:27.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 create -f -' Apr 18 00:19:30.178: INFO: stderr: "" Apr 18 00:19:30.178: INFO: stdout: "e2e-test-crd-publish-openapi-9278-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 18 00:19:30.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 delete e2e-test-crd-publish-openapi-9278-crds test-foo' Apr 18 00:19:30.275: INFO: stderr: "" Apr 18 00:19:30.275: INFO: stdout: "e2e-test-crd-publish-openapi-9278-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 18 00:19:30.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 apply -f -' Apr 18 00:19:30.542: INFO: stderr: "" Apr 18 00:19:30.542: INFO: stdout: "e2e-test-crd-publish-openapi-9278-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 18 00:19:30.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 delete e2e-test-crd-publish-openapi-9278-crds test-foo' Apr 18 00:19:30.647: INFO: stderr: "" Apr 18 00:19:30.647: INFO: stdout: "e2e-test-crd-publish-openapi-9278-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 18 00:19:30.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 create -f -' Apr 18 00:19:30.894: INFO: rc: 1 Apr 18 00:19:30.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 apply -f -' 
Apr 18 00:19:31.113: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 18 00:19:31.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 create -f -' Apr 18 00:19:31.329: INFO: rc: 1 Apr 18 00:19:31.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3710 apply -f -' Apr 18 00:19:31.561: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 18 00:19:31.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9278-crds' Apr 18 00:19:31.852: INFO: stderr: "" Apr 18 00:19:31.852: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9278-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 18 00:19:31.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9278-crds.metadata' Apr 18 00:19:32.103: INFO: stderr: "" Apr 18 00:19:32.103: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9278-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 18 00:19:32.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9278-crds.spec' Apr 18 00:19:32.354: INFO: stderr: "" Apr 18 00:19:32.354: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9278-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 18 00:19:32.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9278-crds.spec.bars' Apr 18 00:19:32.589: INFO: stderr: "" Apr 18 00:19:32.589: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9278-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 18 00:19:32.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9278-crds.spec.bars2' Apr 18 00:19:32.833: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:35.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3710" for this suite. • [SLOW TEST:11.560 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":160,"skipped":2549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:35.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 18 00:19:35.785: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix645299852/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:35.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8094" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":161,"skipped":2617,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:35.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:19:35.919: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition 
resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:42.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1015" for this suite. • [SLOW TEST:6.345 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":162,"skipped":2635,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:42.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:19:42.255: INFO: Creating 
deployment "webserver-deployment" Apr 18 00:19:42.271: INFO: Waiting for observed generation 1 Apr 18 00:19:44.336: INFO: Waiting for all required pods to come up Apr 18 00:19:44.340: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 18 00:19:52.350: INFO: Waiting for deployment "webserver-deployment" to complete Apr 18 00:19:52.356: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 18 00:19:52.362: INFO: Updating deployment webserver-deployment Apr 18 00:19:52.362: INFO: Waiting for observed generation 2 Apr 18 00:19:54.408: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 18 00:19:54.410: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 18 00:19:54.412: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 18 00:19:54.417: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 18 00:19:54.417: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 18 00:19:54.419: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 18 00:19:54.422: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 18 00:19:54.422: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 18 00:19:54.427: INFO: Updating deployment webserver-deployment Apr 18 00:19:54.427: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 18 00:19:54.570: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 18 00:19:54.596: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 
18 00:19:54.828: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7431 /apis/apps/v1/namespaces/deployment-7431/deployments/webserver-deployment ab511ef5-482b-428a-a5f6-9efbef902657 8934522 3 2020-04-18 00:19:42 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004372458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-18 00:19:52 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-18 00:19:54 +0000 
UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 18 00:19:54.935: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7431 /apis/apps/v1/namespaces/deployment-7431/replicasets/webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 8934582 3 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ab511ef5-482b-428a-a5f6-9efbef902657 0xc004372b77 0xc004372b78}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004372c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:19:54.935: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 18 00:19:54.935: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 
deployment-7431 /apis/apps/v1/namespaces/deployment-7431/replicasets/webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 8934559 3 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ab511ef5-482b-428a-a5f6-9efbef902657 0xc004372a87 0xc004372a88}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004372af8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:19:55.040: INFO: Pod "webserver-deployment-595b5b9587-66nmj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-66nmj webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-66nmj a1014a63-c63c-4fba-bbc0-4035c90f6e33 8934532 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373307 0xc004373308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil
,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.040: INFO: Pod "webserver-deployment-595b5b9587-7cx62" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7cx62 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-7cx62 126f1a7c-32d8-4b3f-b411-e715eabb9ea1 8934529 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373457 0xc004373458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.040: INFO: Pod "webserver-deployment-595b5b9587-8kwrr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8kwrr webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-8kwrr 3758f979-3416-48ed-909c-e21c7235b100 8934413 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc0043735e7 0xc0043735e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.250,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d70d587bb3d634f156971eb0d655219e827d8996ea9574544eb55ceebe1d7a49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.040: INFO: Pod "webserver-deployment-595b5b9587-9tbw2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9tbw2 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-9tbw2 059219a2-6243-41cc-bf8f-1a2853d36064 8934432 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc0043737b7 0xc0043737b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.204,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:50 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://06829500ea4f52c220d7dc58ec1dc3680e3d647bc9de89f1fcd0fbc588e84264,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.040: INFO: Pod "webserver-deployment-595b5b9587-bf8l8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bf8l8 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-bf8l8 f8a3a220-dbe2-4eff-9738-54b1c19c6165 8934448 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373947 0xc004373948}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.205,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://924b9e782986843470439c2e090056537a0b95f3bb01737386119e4fadc6d2bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.205,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.041: INFO: Pod "webserver-deployment-595b5b9587-bw5n6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bw5n6 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-bw5n6 ce4375c2-7b10-453d-8155-5a2864177e8f 8934436 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373ac7 0xc004373ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.202,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef181e702a178fe9e2102809f87faa9aae1fba699639f70c0249296f984a27cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.041: INFO: Pod "webserver-deployment-595b5b9587-gpdn6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gpdn6 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-gpdn6 23f71a3c-e25b-4d12-954c-2f0da13228e3 8934537 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373cb7 0xc004373cb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.041: INFO: Pod "webserver-deployment-595b5b9587-gw758" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gw758 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-gw758 e6cb2ab3-1449-4871-86a1-db71e0f453c9 8934451 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373de7 0xc004373de8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.206,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c998343510d3af4b6a3925e79b8c4bd4461b83cd00f9cc10d25863526b8b1f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.041: INFO: Pod "webserver-deployment-595b5b9587-h8cmw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h8cmw webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-h8cmw 525cc23c-95a3-4a84-8301-5fd8d12d2a92 8934584 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc004373fd7 0xc004373fd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-18 00:19:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.041: INFO: Pod "webserver-deployment-595b5b9587-j5sfh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j5sfh webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-j5sfh 140d21c9-5771-485f-9ff8-c4443ead9bbc 8934563 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341a237 0xc00341a238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-kvjnm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kvjnm webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-kvjnm ff7792aa-bad5-41d6-9816-f2bab879777b 8934407 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341a3a7 0xc00341a3a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.249,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://59c26e3b3a8a7bca7d0d89c37c80aed3d970ae3383f22a8e2c38022202225fda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-lbhrf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lbhrf webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-lbhrf 77127558-8d7e-4201-bcd5-0a3cc936360c 8934566 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341a5c7 0xc00341a5c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-lrnjl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lrnjl webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-lrnjl 1788ff3c-c253-4136-82de-f43514b0d0d3 8934414 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341a787 0xc00341a788}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.203,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a6b3013ed711f54ad3bbe44a5837232f8dd3faa1aff31d16505e34b4a9987ba4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.203,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-n4jdr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n4jdr webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-n4jdr 9ce5611a-745f-42b1-a112-7b6d5b90dd8f 8934567 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341a947 0xc00341a948}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-18 00:19:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-qnfm6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qnfm6 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-qnfm6 9a03780b-389f-4bfb-b877-0346bcef036b 8934565 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341aaf7 0xc00341aaf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-qvjgv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qvjgv webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-qvjgv 2ec79902-4435-4c91-b7ca-43ea7838cf03 8934564 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341aca7 0xc00341aca8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-rzjm6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rzjm6 webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-rzjm6 32fe571d-b34d-4cc7-a0e7-758b25e2a4e5 8934531 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341add7 0xc00341add8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.042: INFO: Pod "webserver-deployment-595b5b9587-v5ckn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v5ckn webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-v5ckn 8998b972-6686-40bd-b9d9-b49c5aec3a8a 8934560 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341af67 0xc00341af68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-595b5b9587-wqhbv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wqhbv webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-wqhbv ab87dd84-fe15-49be-b1da-b2135e00eeba 8934533 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341b0c7 0xc00341b0c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-595b5b9587-z7xvf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z7xvf webserver-deployment-595b5b9587- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-595b5b9587-z7xvf 8ff35567-8abf-4d24-ad99-cb61ecc25f8d 8934401 0 2020-04-18 00:19:42 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 7572baea-07f3-424f-a8d5-9aba45c7abec 0xc00341b207 0xc00341b208}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.248,StartTime:2020-04-18 00:19:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:19:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7a42bcff1f10ede469593b9d804467f61fbd30b85df3cdc8e41e1bc5897b75a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-7k22q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7k22q webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-7k22q 6cbb04e3-18a4-407f-b467-152f75964e59 8934581 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341b437 0xc00341b438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-18 00:19:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-8mvtd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8mvtd webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-8mvtd 62cac0ae-5965-4f50-a69c-8e318dacd0b5 8934478 0 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341b667 0xc00341b668}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-18 00:19:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-8sp6j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8sp6j webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-8sp6j 7ef63472-0d26-475a-ab13-c043339f9fa8 8934492 0 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341b867 0xc00341b868}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-18 00:19:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-b7qcg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7qcg webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-b7qcg 37caf031-db2f-4323-9a7b-f05b5b2060c9 8934505 0 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341bad7 0xc00341bad8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-18 00:19:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-c8vtr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c8vtr webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-c8vtr cf4a58e2-8a6b-4ad8-a04b-2384cd654d45 8934476 0 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341bd27 0xc00341bd28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-18 00:19:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.043: INFO: Pod "webserver-deployment-c7997dcc8-dx858" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dx858 webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-dx858 d5bf04e5-a23d-4cb9-9028-3187c4b608ea 8934502 0 2020-04-18 00:19:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc00341bed7 0xc00341bed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-18 00:19:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-frw6t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-frw6t webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-frw6t af91e772-fb29-4d5f-8b7d-03512e2d0670 8934534 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b0107 0xc0035b0108}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-gznkf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gznkf webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-gznkf 2c077ec5-2a0d-455c-ab89-edf7665683d2 8934555 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b0287 0xc0035b0288}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-pm22g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pm22g webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-pm22g 6c212493-3b87-46aa-b825-04be072e612d 8934535 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b03b7 0xc0035b03b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-skbgl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-skbgl webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-skbgl 9d25882c-6491-4119-af66-29e8164ee059 8934561 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b04f7 0xc0035b04f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-smxnp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-smxnp webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-smxnp 84bf7b67-91eb-49c9-9930-ee9ba7449dc9 8934558 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b06c7 0xc0035b06c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-vd4gm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vd4gm webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-vd4gm 7a5b89ca-067e-4f52-90bc-ddbd79c91236 8934568 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b0847 0xc0035b0848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 18 00:19:55.044: INFO: Pod "webserver-deployment-c7997dcc8-wnr2g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wnr2g webserver-deployment-c7997dcc8- deployment-7431 /api/v1/namespaces/deployment-7431/pods/webserver-deployment-c7997dcc8-wnr2g a946c67f-4423-4c5c-b7a0-3a82fe147ecc 8934562 0 2020-04-18 00:19:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bd184058-26bb-4c8d-b61d-10b3db663434 0xc0035b09d7 0xc0035b09d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zg6t4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zg6t4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zg6t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:19:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:19:55.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7431" for this suite. 
• [SLOW TEST:12.986 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":163,"skipped":2642,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:19:55.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:20:10.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7344" for this suite. 
• [SLOW TEST:15.292 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":164,"skipped":2648,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:20:10.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Apr 18 00:20:15.795: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1258 pod-service-account-ed26505f-5289-4b2f-9de5-483f4d2aad9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 18 00:20:16.009: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1258 pod-service-account-ed26505f-5289-4b2f-9de5-483f4d2aad9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 18 00:20:16.222: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1258 
pod-service-account-ed26505f-5289-4b2f-9de5-483f4d2aad9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:20:16.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1258" for this suite. • [SLOW TEST:5.955 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":165,"skipped":2658,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:20:16.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 18 00:20:17.606: INFO: 
deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 18 00:20:19.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766017, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766017, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766017, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766017, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:20:22.643: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:20:22.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:20:23.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1884" for this suite. 
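A CR conversion webhook like the one deployed above receives a ConversionReview and must return the request's objects rewritten to the desired apiVersion, echoing back the request UID. A minimal sketch of that handler logic follows; a real webhook also serves this over HTTPS and usually transforms the object schema between versions, which this sketch omits:

```python
def convert_review(review):
    """Handle a ConversionReview: convert every object in the
    request to the desired apiVersion and return a Success
    response carrying the same UID, per the conversion webhook
    contract. Only apiVersion is rewritten here (a v1 -> v2
    conversion with no schema change, as in the test above)."""
    request = review["request"]
    converted = []
    for obj in request["objects"]:
        out = dict(obj)
        out["apiVersion"] = request["desiredAPIVersion"]
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": request["uid"],
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```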
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.557 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":166,"skipped":2658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:20:24.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9035.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:20:30.518: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.521: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.524: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.529: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.538: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.540: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from 
pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.543: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.546: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:30.551: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:20:35.557: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.560: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from 
pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.567: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.577: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.580: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.583: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.586: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:35.593: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:20:40.556: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.560: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.563: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.566: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.574: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.576: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.579: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod 
dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.581: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:40.587: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:20:45.557: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.560: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod 
dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.580: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.583: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.585: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.588: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:45.592: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:20:50.557: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.560: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.578: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.582: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.585: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.588: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:50.592: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:20:55.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.706: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.710: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.713: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.722: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.725: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.728: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.731: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local from pod dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c: the server could not find the requested resource (get pods dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c) Apr 18 00:20:55.737: INFO: Lookups using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9035.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9035.svc.cluster.local jessie_udp@dns-test-service-2.dns-9035.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9035.svc.cluster.local] Apr 18 00:21:00.592: INFO: DNS probes using dns-9035/dns-test-f8955e74-adb3-442a-8ffb-dd795a11c87c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 
00:21:00.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9035" for this suite. • [SLOW TEST:37.241 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":167,"skipped":2683,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:21:01.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5342 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 18 00:21:01.372: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 18 00:21:01.411: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:21:03.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:21:05.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 
00:21:07.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:09.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:11.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:13.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:15.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:17.414: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:19.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:21.415: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 18 00:21:21.421: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 18 00:21:25.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.17:8080/dial?request=hostname&protocol=udp&host=10.244.2.16&port=8081&tries=1'] Namespace:pod-network-test-5342 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:21:25.447: INFO: >>> kubeConfig: /root/.kube/config I0418 00:21:25.486516 7 log.go:172] (0xc002ada8f0) (0xc002c295e0) Create stream I0418 00:21:25.486564 7 log.go:172] (0xc002ada8f0) (0xc002c295e0) Stream added, broadcasting: 1 I0418 00:21:25.488370 7 log.go:172] (0xc002ada8f0) Reply frame received for 1 I0418 00:21:25.488420 7 log.go:172] (0xc002ada8f0) (0xc002c29720) Create stream I0418 00:21:25.488446 7 log.go:172] (0xc002ada8f0) (0xc002c29720) Stream added, broadcasting: 3 I0418 00:21:25.489603 7 log.go:172] (0xc002ada8f0) Reply frame received for 3 I0418 00:21:25.489645 7 log.go:172] (0xc002ada8f0) (0xc002c29900) Create stream I0418 00:21:25.489659 7 log.go:172] (0xc002ada8f0) (0xc002c29900) Stream added, broadcasting: 5 I0418 00:21:25.490737 7 log.go:172] (0xc002ada8f0) Reply frame received for 5 I0418 00:21:25.583483 7 log.go:172] (0xc002ada8f0) Data frame received for 
3 I0418 00:21:25.583510 7 log.go:172] (0xc002c29720) (3) Data frame handling I0418 00:21:25.583523 7 log.go:172] (0xc002c29720) (3) Data frame sent I0418 00:21:25.583962 7 log.go:172] (0xc002ada8f0) Data frame received for 3 I0418 00:21:25.583983 7 log.go:172] (0xc002c29720) (3) Data frame handling I0418 00:21:25.584003 7 log.go:172] (0xc002ada8f0) Data frame received for 5 I0418 00:21:25.584013 7 log.go:172] (0xc002c29900) (5) Data frame handling I0418 00:21:25.585906 7 log.go:172] (0xc002ada8f0) Data frame received for 1 I0418 00:21:25.585918 7 log.go:172] (0xc002c295e0) (1) Data frame handling I0418 00:21:25.585926 7 log.go:172] (0xc002c295e0) (1) Data frame sent I0418 00:21:25.586094 7 log.go:172] (0xc002ada8f0) (0xc002c295e0) Stream removed, broadcasting: 1 I0418 00:21:25.586156 7 log.go:172] (0xc002ada8f0) (0xc002c295e0) Stream removed, broadcasting: 1 I0418 00:21:25.586164 7 log.go:172] (0xc002ada8f0) (0xc002c29720) Stream removed, broadcasting: 3 I0418 00:21:25.586170 7 log.go:172] (0xc002ada8f0) (0xc002c29900) Stream removed, broadcasting: 5 Apr 18 00:21:25.586: INFO: Waiting for responses: map[] I0418 00:21:25.586224 7 log.go:172] (0xc002ada8f0) Go away received Apr 18 00:21:25.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.17:8080/dial?request=hostname&protocol=udp&host=10.244.1.219&port=8081&tries=1'] Namespace:pod-network-test-5342 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:21:25.589: INFO: >>> kubeConfig: /root/.kube/config I0418 00:21:25.623106 7 log.go:172] (0xc002c33340) (0xc00111e460) Create stream I0418 00:21:25.623155 7 log.go:172] (0xc002c33340) (0xc00111e460) Stream added, broadcasting: 1 I0418 00:21:25.625050 7 log.go:172] (0xc002c33340) Reply frame received for 1 I0418 00:21:25.625101 7 log.go:172] (0xc002c33340) (0xc002c29ae0) Create stream I0418 00:21:25.625261 7 log.go:172] (0xc002c33340) (0xc002c29ae0) Stream added, 
broadcasting: 3 I0418 00:21:25.626218 7 log.go:172] (0xc002c33340) Reply frame received for 3 I0418 00:21:25.626247 7 log.go:172] (0xc002c33340) (0xc000ddd7c0) Create stream I0418 00:21:25.626260 7 log.go:172] (0xc002c33340) (0xc000ddd7c0) Stream added, broadcasting: 5 I0418 00:21:25.627127 7 log.go:172] (0xc002c33340) Reply frame received for 5 I0418 00:21:25.700459 7 log.go:172] (0xc002c33340) Data frame received for 3 I0418 00:21:25.700490 7 log.go:172] (0xc002c29ae0) (3) Data frame handling I0418 00:21:25.700511 7 log.go:172] (0xc002c29ae0) (3) Data frame sent I0418 00:21:25.701095 7 log.go:172] (0xc002c33340) Data frame received for 5 I0418 00:21:25.701194 7 log.go:172] (0xc000ddd7c0) (5) Data frame handling I0418 00:21:25.701311 7 log.go:172] (0xc002c33340) Data frame received for 3 I0418 00:21:25.701346 7 log.go:172] (0xc002c29ae0) (3) Data frame handling I0418 00:21:25.702978 7 log.go:172] (0xc002c33340) Data frame received for 1 I0418 00:21:25.703009 7 log.go:172] (0xc00111e460) (1) Data frame handling I0418 00:21:25.703041 7 log.go:172] (0xc00111e460) (1) Data frame sent I0418 00:21:25.703061 7 log.go:172] (0xc002c33340) (0xc00111e460) Stream removed, broadcasting: 1 I0418 00:21:25.703079 7 log.go:172] (0xc002c33340) Go away received I0418 00:21:25.703240 7 log.go:172] (0xc002c33340) (0xc00111e460) Stream removed, broadcasting: 1 I0418 00:21:25.703267 7 log.go:172] (0xc002c33340) (0xc002c29ae0) Stream removed, broadcasting: 3 I0418 00:21:25.703279 7 log.go:172] (0xc002c33340) (0xc000ddd7c0) Stream removed, broadcasting: 5 Apr 18 00:21:25.703: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:21:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5342" for this suite. 
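The intra-pod UDP check above works by curling the agnhost webserver's `/dial` endpoint inside the test-container-pod, which proxies a `hostname` request over UDP to the target netserver pod and reports the responses it collected. A minimal sketch of how that probe URL is assembled (the helper name is hypothetical; the IPs and ports are the examples from this run):

```python
# Sketch (assumption): rebuild the agnhost /dial URL that the e2e
# framework curls from inside the test pod. dial_url is a hypothetical
# helper, not part of the e2e framework itself.
from urllib.parse import urlencode

def dial_url(proxy_ip, proxy_port, target_host, target_port, protocol="udp"):
    """Return the /dial URL asking the webserver at proxy_ip:proxy_port
    to send a 'hostname' request to target_host:target_port."""
    query = urlencode({
        "request": "hostname",
        "protocol": protocol,
        "host": target_host,
        "port": target_port,
        "tries": 1,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"

print(dial_url("10.244.2.17", 8080, "10.244.2.16", 8081))
```

The endpoint answers with JSON listing the hostnames that replied, which is how the run above ends up logging `Waiting for responses: map[]` once every expected endpoint has answered.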
• [SLOW TEST:24.468 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2692,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:21:25.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2337 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 18 00:21:25.793: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 18 00:21:25.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:21:28.006: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 
18 00:21:29.936: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:21:32.008: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:33.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:35.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:37.883: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:39.872: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:41.872: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:43.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 18 00:21:45.872: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 18 00:21:45.878: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 18 00:21:47.882: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 18 00:21:49.883: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 18 00:21:53.919: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2337 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:21:53.919: INFO: >>> kubeConfig: /root/.kube/config I0418 00:21:53.957697 7 log.go:172] (0xc002c338c0) (0xc00111f680) Create stream I0418 00:21:53.957726 7 log.go:172] (0xc002c338c0) (0xc00111f680) Stream added, broadcasting: 1 I0418 00:21:53.959472 7 log.go:172] (0xc002c338c0) Reply frame received for 1 I0418 00:21:53.959516 7 log.go:172] (0xc002c338c0) (0xc00111f9a0) Create stream I0418 00:21:53.959529 7 log.go:172] (0xc002c338c0) (0xc00111f9a0) Stream added, broadcasting: 3 I0418 00:21:53.960377 7 log.go:172] (0xc002c338c0) Reply frame received for 3 I0418 00:21:53.960443 7 log.go:172] (0xc002c338c0) (0xc0012b0460) Create stream 
I0418 00:21:53.960463 7 log.go:172] (0xc002c338c0) (0xc0012b0460) Stream added, broadcasting: 5 I0418 00:21:53.961498 7 log.go:172] (0xc002c338c0) Reply frame received for 5 I0418 00:21:55.054803 7 log.go:172] (0xc002c338c0) Data frame received for 5 I0418 00:21:55.054879 7 log.go:172] (0xc0012b0460) (5) Data frame handling I0418 00:21:55.054918 7 log.go:172] (0xc002c338c0) Data frame received for 3 I0418 00:21:55.054931 7 log.go:172] (0xc00111f9a0) (3) Data frame handling I0418 00:21:55.054952 7 log.go:172] (0xc00111f9a0) (3) Data frame sent I0418 00:21:55.055160 7 log.go:172] (0xc002c338c0) Data frame received for 3 I0418 00:21:55.055200 7 log.go:172] (0xc00111f9a0) (3) Data frame handling I0418 00:21:55.056903 7 log.go:172] (0xc002c338c0) Data frame received for 1 I0418 00:21:55.056957 7 log.go:172] (0xc00111f680) (1) Data frame handling I0418 00:21:55.057013 7 log.go:172] (0xc00111f680) (1) Data frame sent I0418 00:21:55.057042 7 log.go:172] (0xc002c338c0) (0xc00111f680) Stream removed, broadcasting: 1 I0418 00:21:55.057067 7 log.go:172] (0xc002c338c0) Go away received I0418 00:21:55.057285 7 log.go:172] (0xc002c338c0) (0xc00111f680) Stream removed, broadcasting: 1 I0418 00:21:55.057306 7 log.go:172] (0xc002c338c0) (0xc00111f9a0) Stream removed, broadcasting: 3 I0418 00:21:55.057326 7 log.go:172] (0xc002c338c0) (0xc0012b0460) Stream removed, broadcasting: 5 Apr 18 00:21:55.057: INFO: Found all expected endpoints: [netserver-0] Apr 18 00:21:55.061: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.220 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2337 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:21:55.061: INFO: >>> kubeConfig: /root/.kube/config I0418 00:21:55.088474 7 log.go:172] (0xc002f0c4d0) (0xc00116fe00) Create stream I0418 00:21:55.088514 7 log.go:172] (0xc002f0c4d0) (0xc00116fe00) Stream added, broadcasting: 1 I0418 00:21:55.090605 
7 log.go:172] (0xc002f0c4d0) Reply frame received for 1 I0418 00:21:55.090647 7 log.go:172] (0xc002f0c4d0) (0xc00116ff40) Create stream I0418 00:21:55.090674 7 log.go:172] (0xc002f0c4d0) (0xc00116ff40) Stream added, broadcasting: 3 I0418 00:21:55.091543 7 log.go:172] (0xc002f0c4d0) Reply frame received for 3 I0418 00:21:55.091571 7 log.go:172] (0xc002f0c4d0) (0xc0012b0780) Create stream I0418 00:21:55.091587 7 log.go:172] (0xc002f0c4d0) (0xc0012b0780) Stream added, broadcasting: 5 I0418 00:21:55.092515 7 log.go:172] (0xc002f0c4d0) Reply frame received for 5 I0418 00:21:56.155934 7 log.go:172] (0xc002f0c4d0) Data frame received for 5 I0418 00:21:56.155984 7 log.go:172] (0xc0012b0780) (5) Data frame handling I0418 00:21:56.156040 7 log.go:172] (0xc002f0c4d0) Data frame received for 3 I0418 00:21:56.156064 7 log.go:172] (0xc00116ff40) (3) Data frame handling I0418 00:21:56.156088 7 log.go:172] (0xc00116ff40) (3) Data frame sent I0418 00:21:56.156116 7 log.go:172] (0xc002f0c4d0) Data frame received for 3 I0418 00:21:56.156134 7 log.go:172] (0xc00116ff40) (3) Data frame handling I0418 00:21:56.158147 7 log.go:172] (0xc002f0c4d0) Data frame received for 1 I0418 00:21:56.158174 7 log.go:172] (0xc00116fe00) (1) Data frame handling I0418 00:21:56.158193 7 log.go:172] (0xc00116fe00) (1) Data frame sent I0418 00:21:56.158219 7 log.go:172] (0xc002f0c4d0) (0xc00116fe00) Stream removed, broadcasting: 1 I0418 00:21:56.158250 7 log.go:172] (0xc002f0c4d0) Go away received I0418 00:21:56.158425 7 log.go:172] (0xc002f0c4d0) (0xc00116fe00) Stream removed, broadcasting: 1 I0418 00:21:56.158462 7 log.go:172] (0xc002f0c4d0) (0xc00116ff40) Stream removed, broadcasting: 3 I0418 00:21:56.158494 7 log.go:172] (0xc002f0c4d0) (0xc0012b0780) Stream removed, broadcasting: 5 Apr 18 00:21:56.158: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 
00:21:56.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2337" for this suite. • [SLOW TEST:30.453 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:21:56.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:21:56.243: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5" in namespace "downward-api-6538" to be "Succeeded or Failed" Apr 18 00:21:56.251: INFO: Pod "downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.851082ms Apr 18 00:21:58.255: INFO: Pod "downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011655345s Apr 18 00:22:00.259: INFO: Pod "downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016090388s STEP: Saw pod success Apr 18 00:22:00.259: INFO: Pod "downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5" satisfied condition "Succeeded or Failed" Apr 18 00:22:00.262: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5 container client-container: STEP: delete the pod Apr 18 00:22:00.294: INFO: Waiting for pod downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5 to disappear Apr 18 00:22:00.299: INFO: Pod downwardapi-volume-b7a7dced-ca92-474d-ba08-1ada8c8c5ba5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:00.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6538" for this suite. 
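The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from a poll loop that re-reads the pod's phase roughly every two seconds and logs the elapsed time. A sketch of that polling pattern under stated assumptions (the function name and the injected `get_phase`/`clock`/`sleep` parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0,
                           done_phases=("Succeeded", "Failed"),
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout_s
    elapses, mirroring the 'Phase="Pending" ... Elapsed: ...' lines.
    get_phase stands in for a GET of the pod's status.phase; clock and
    sleep are injectable so the loop can be exercised without waiting."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in done_phases:
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(poll_s)
```

With a 2s poll interval this reproduces the cadence seen in the log: Pending at ~0s and ~2s, then Succeeded at ~4s.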
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2749,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:00.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 18 00:22:00.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4251' Apr 18 00:22:00.686: INFO: stderr: "" Apr 18 00:22:00.686: INFO: stdout: "pod/pause created\n" Apr 18 00:22:00.686: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 18 00:22:00.687: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4251" to be "running and ready" Apr 18 00:22:00.700: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.64527ms Apr 18 00:22:02.715: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027926066s Apr 18 00:22:04.719: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.032211202s Apr 18 00:22:04.719: INFO: Pod "pause" satisfied condition "running and ready" Apr 18 00:22:04.719: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 18 00:22:04.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4251' Apr 18 00:22:04.822: INFO: stderr: "" Apr 18 00:22:04.822: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 18 00:22:04.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4251' Apr 18 00:22:04.903: INFO: stderr: "" Apr 18 00:22:04.903: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 18 00:22:04.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4251' Apr 18 00:22:05.003: INFO: stderr: "" Apr 18 00:22:05.003: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 18 00:22:05.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4251' Apr 18 00:22:05.089: INFO: stderr: "" Apr 18 00:22:05.089: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 18 00:22:05.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4251' Apr 18 00:22:05.204: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 18 00:22:05.204: INFO: stdout: "pod \"pause\" force deleted\n" Apr 18 00:22:05.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4251' Apr 18 00:22:05.306: INFO: stderr: "No resources found in kubectl-4251 namespace.\n" Apr 18 00:22:05.306: INFO: stdout: "" Apr 18 00:22:05.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4251 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 18 00:22:05.434: INFO: stderr: "" Apr 18 00:22:05.434: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:05.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4251" for this suite. 
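The label test above exercises the two halves of `kubectl label` syntax: `testing-label=testing-label-value` sets a label, and the trailing dash in `testing-label-` removes it. A small sketch of those argument semantics applied to a plain dict standing in for `metadata.labels` (illustrative only; real `kubectl` also validates keys and patches the API server):

```python
def apply_label_args(labels, args):
    """Mimic `kubectl label` argument handling: 'key=value' sets a
    label, and a bare 'key-' (trailing dash, no '=') removes it.
    Returns a new dict; the input is left unmodified."""
    out = dict(labels)
    for arg in args:
        if "=" not in arg and arg.endswith("-"):
            out.pop(arg[:-1], None)  # removal form, e.g. 'testing-label-'
        else:
            key, _, value = arg.partition("=")
            out[key] = value
    return out
```

Note the order of the checks matters: a value such as `testing-label-value` contains a dash but also an `=`, so it is treated as a set, matching what the run above shows.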
• [SLOW TEST:5.191 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":171,"skipped":2757,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:05.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 18 00:22:12.273: INFO: Successfully updated pod "adopt-release-4jb4j" STEP: Checking that the Job readopts the Pod Apr 18 00:22:12.273: INFO: Waiting up to 15m0s for pod "adopt-release-4jb4j" in namespace "job-1807" to be "adopted" Apr 18 00:22:12.296: INFO: Pod "adopt-release-4jb4j": Phase="Running", Reason="", readiness=true. Elapsed: 23.230335ms Apr 18 00:22:14.301: INFO: Pod "adopt-release-4jb4j": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027746612s Apr 18 00:22:14.301: INFO: Pod "adopt-release-4jb4j" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 18 00:22:14.809: INFO: Successfully updated pod "adopt-release-4jb4j" STEP: Checking that the Job releases the Pod Apr 18 00:22:14.809: INFO: Waiting up to 15m0s for pod "adopt-release-4jb4j" in namespace "job-1807" to be "released" Apr 18 00:22:14.814: INFO: Pod "adopt-release-4jb4j": Phase="Running", Reason="", readiness=true. Elapsed: 4.71687ms Apr 18 00:22:16.818: INFO: Pod "adopt-release-4jb4j": Phase="Running", Reason="", readiness=true. Elapsed: 2.008402628s Apr 18 00:22:16.818: INFO: Pod "adopt-release-4jb4j" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:16.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1807" for this suite. • [SLOW TEST:11.330 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":172,"skipped":2768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 
00:22:16.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:22:17.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:22:19.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766137, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766137, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766137, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766137, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:22:22.228: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 
00:22:22.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:23.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6311" for this suite. STEP: Destroying namespace "webhook-6311-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.738 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":173,"skipped":2807,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:23.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-1e91870d-e029-4736-adfe-e7e67b7a8fa7 STEP: Creating a pod to test consume configMaps Apr 18 00:22:23.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a" in namespace "configmap-4393" to be "Succeeded or Failed" Apr 18 00:22:23.931: INFO: Pod "pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a": Phase="Pending", Reason="", readiness=false. Elapsed: 234.586882ms Apr 18 00:22:25.934: INFO: Pod "pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238036186s Apr 18 00:22:27.953: INFO: Pod "pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.257068596s STEP: Saw pod success Apr 18 00:22:27.953: INFO: Pod "pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a" satisfied condition "Succeeded or Failed" Apr 18 00:22:27.973: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a container configmap-volume-test: STEP: delete the pod Apr 18 00:22:27.990: INFO: Waiting for pod pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a to disappear Apr 18 00:22:27.994: INFO: Pod pod-configmaps-15b5887f-96db-408c-a7e0-b56fff6b892a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:27.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4393" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:28.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is 
adopted STEP: When the matched label of one of its pods change Apr 18 00:22:33.180: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:33.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9816" for this suite. • [SLOW TEST:5.514 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":175,"skipped":2834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:33.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
downward API volume plugin Apr 18 00:22:33.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76" in namespace "downward-api-4187" to be "Succeeded or Failed" Apr 18 00:22:33.719: INFO: Pod "downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.973209ms Apr 18 00:22:35.775: INFO: Pod "downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05898484s Apr 18 00:22:37.779: INFO: Pod "downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062691763s STEP: Saw pod success Apr 18 00:22:37.779: INFO: Pod "downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76" satisfied condition "Succeeded or Failed" Apr 18 00:22:37.782: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76 container client-container: STEP: delete the pod Apr 18 00:22:37.849: INFO: Waiting for pod downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76 to disappear Apr 18 00:22:37.889: INFO: Pod downwardapi-volume-52b8cace-d217-4ed6-a963-549b38e98c76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:37.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4187" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2883,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:37.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-8ec7721a-0218-4d04-aa1c-f06193ffeb15 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:46.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8050" for this suite. 
• [SLOW TEST:8.159 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:46.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:22:46.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea" in namespace "projected-1327" to be "Succeeded or Failed" Apr 18 00:22:46.152: INFO: Pod "downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.456183ms Apr 18 00:22:48.156: INFO: Pod "downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027111446s Apr 18 00:22:50.160: INFO: Pod "downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030484428s STEP: Saw pod success Apr 18 00:22:50.160: INFO: Pod "downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea" satisfied condition "Succeeded or Failed" Apr 18 00:22:50.162: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea container client-container: STEP: delete the pod Apr 18 00:22:50.189: INFO: Waiting for pod downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea to disappear Apr 18 00:22:50.218: INFO: Pod downwardapi-volume-d9183bff-da07-4ec2-85c3-3f167094f9ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:50.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1327" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":2969,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:50.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:22:51.148: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:22:53.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766171, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766171, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766171, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766171, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:22:56.537: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:22:56.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6322" for this suite. STEP: Destroying namespace "webhook-6322-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.860 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":179,"skipped":2970,"failed":0} [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:22:57.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 18 00:22:57.165: INFO: Waiting up to 5m0s for pod "downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34" in namespace "downward-api-5137" to be "Succeeded or Failed" Apr 18 00:22:57.182: INFO: Pod "downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.461896ms Apr 18 00:22:59.573: INFO: Pod "downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407870392s Apr 18 00:23:01.576: INFO: Pod "downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.410824212s STEP: Saw pod success Apr 18 00:23:01.576: INFO: Pod "downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34" satisfied condition "Succeeded or Failed" Apr 18 00:23:01.578: INFO: Trying to get logs from node latest-worker2 pod downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34 container dapi-container: STEP: delete the pod Apr 18 00:23:01.628: INFO: Waiting for pod downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34 to disappear Apr 18 00:23:01.636: INFO: Pod downward-api-dd34fef3-a4f3-4a5b-89c5-49eb07bf9f34 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:23:01.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5137" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":2970,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:23:01.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 18 00:23:01.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:01.756: INFO: Number of nodes with available pods: 0 Apr 18 00:23:01.756: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:23:02.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:02.830: INFO: Number of nodes with available pods: 0 Apr 18 00:23:02.830: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:23:03.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:03.840: INFO: Number of nodes with available pods: 0 Apr 18 00:23:03.840: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:23:04.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:04.782: INFO: Number of nodes with available pods: 0 Apr 18 00:23:04.782: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:23:05.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:05.763: INFO: Number of nodes with available pods: 2 Apr 18 00:23:05.763: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 18 00:23:05.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:23:05.781: INFO: Number of nodes with available pods: 2 Apr 18 00:23:05.782: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6995, will wait for the garbage collector to delete the pods Apr 18 00:23:06.883: INFO: Deleting DaemonSet.extensions daemon-set took: 25.262421ms Apr 18 00:23:07.383: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.290422ms Apr 18 00:24:42.787: INFO: Number of nodes with available pods: 0 Apr 18 00:24:42.787: INFO: Number of running nodes: 0, number of available pods: 0 Apr 18 00:24:42.790: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6995/daemonsets","resourceVersion":"8936452"},"items":null} Apr 18 00:24:42.793: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6995/pods","resourceVersion":"8936452"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:24:42.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6995" for this suite. 
• [SLOW TEST:101.166 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":181,"skipped":2976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:24:42.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-196f02df-4d2a-43f1-870e-8297b6ee0e53 STEP: Creating a pod to test consume secrets Apr 18 00:24:42.892: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b" in namespace "projected-2905" to be "Succeeded or Failed" Apr 18 00:24:42.895: INFO: Pod "pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.259403ms Apr 18 00:24:44.899: INFO: Pod "pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007435111s Apr 18 00:24:46.903: INFO: Pod "pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011269663s STEP: Saw pod success Apr 18 00:24:46.903: INFO: Pod "pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b" satisfied condition "Succeeded or Failed" Apr 18 00:24:46.906: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b container projected-secret-volume-test: STEP: delete the pod Apr 18 00:24:46.949: INFO: Waiting for pod pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b to disappear Apr 18 00:24:46.963: INFO: Pod pod-projected-secrets-e4a1a000-1e02-4155-aa9f-755db884c88b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:24:46.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2905" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3023,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:24:46.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:24:47.024: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 18 00:24:48.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7214 create -f -' Apr 18 00:24:52.643: INFO: stderr: "" Apr 18 00:24:52.643: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 18 00:24:52.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7214 delete e2e-test-crd-publish-openapi-9535-crds test-cr' Apr 18 00:24:52.732: INFO: stderr: "" Apr 18 00:24:52.732: INFO: stdout: 
"e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 18 00:24:52.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7214 apply -f -' Apr 18 00:24:52.978: INFO: stderr: "" Apr 18 00:24:52.978: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 18 00:24:52.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7214 delete e2e-test-crd-publish-openapi-9535-crds test-cr' Apr 18 00:24:53.080: INFO: stderr: "" Apr 18 00:24:53.080: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 18 00:24:53.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9535-crds' Apr 18 00:24:53.300: INFO: stderr: "" Apr 18 00:24:53.300: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9535-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:24:56.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7214" for this suite. 
• [SLOW TEST:9.255 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":183,"skipped":3033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:24:56.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 18 00:24:56.318: INFO: Waiting up to 5m0s for pod "client-containers-a5617f82-8245-4765-8adb-706dcf078a66" in namespace "containers-4258" to be "Succeeded or Failed" Apr 18 00:24:56.322: INFO: Pod "client-containers-a5617f82-8245-4765-8adb-706dcf078a66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.885296ms Apr 18 00:24:58.326: INFO: Pod "client-containers-a5617f82-8245-4765-8adb-706dcf078a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008062713s Apr 18 00:25:00.331: INFO: Pod "client-containers-a5617f82-8245-4765-8adb-706dcf078a66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012515344s STEP: Saw pod success Apr 18 00:25:00.331: INFO: Pod "client-containers-a5617f82-8245-4765-8adb-706dcf078a66" satisfied condition "Succeeded or Failed" Apr 18 00:25:00.334: INFO: Trying to get logs from node latest-worker pod client-containers-a5617f82-8245-4765-8adb-706dcf078a66 container test-container: STEP: delete the pod Apr 18 00:25:00.443: INFO: Waiting for pod client-containers-a5617f82-8245-4765-8adb-706dcf078a66 to disappear Apr 18 00:25:00.461: INFO: Pod client-containers-a5617f82-8245-4765-8adb-706dcf078a66 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:25:00.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4258" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3087,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:25:00.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-a78be63b-d70b-4e7c-8613-811e0f0cc45f in namespace container-probe-8416 Apr 18 00:25:04.541: INFO: Started pod liveness-a78be63b-d70b-4e7c-8613-811e0f0cc45f in namespace container-probe-8416 STEP: checking the pod's current state and verifying that restartCount is present Apr 18 00:25:04.544: INFO: Initial restart count of pod liveness-a78be63b-d70b-4e7c-8613-811e0f0cc45f is 0 Apr 18 00:25:24.588: INFO: Restart count of pod container-probe-8416/liveness-a78be63b-d70b-4e7c-8613-811e0f0cc45f is now 1 (20.044237145s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:25:24.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-8416" for this suite. • [SLOW TEST:24.140 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3108,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:25:24.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-c5315a23-850c-4e19-a0dd-fe9c2c9eece3 STEP: Creating a pod to test consume configMaps Apr 18 00:25:24.719: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7" in namespace "projected-7854" to be "Succeeded or Failed" Apr 18 00:25:24.730: INFO: Pod "pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.540055ms Apr 18 00:25:26.748: INFO: Pod "pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029271047s Apr 18 00:25:28.752: INFO: Pod "pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033350281s STEP: Saw pod success Apr 18 00:25:28.752: INFO: Pod "pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7" satisfied condition "Succeeded or Failed" Apr 18 00:25:28.756: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7 container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:25:28.803: INFO: Waiting for pod pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7 to disappear Apr 18 00:25:28.831: INFO: Pod pod-projected-configmaps-6467f9f4-a3e4-4b23-8c9e-1058e09f84a7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:25:28.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7854" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3122,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:25:28.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:25:28.919: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044" in namespace "security-context-test-2433" to be "Succeeded or Failed" Apr 18 00:25:28.922: INFO: Pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894121ms Apr 18 00:25:30.925: INFO: Pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006045302s Apr 18 00:25:32.929: INFO: Pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010341833s
Apr 18 00:25:32.929: INFO: Pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044" satisfied condition "Succeeded or Failed"
Apr 18 00:25:32.936: INFO: Got logs for pod "busybox-privileged-false-a3dd323c-9020-4885-9b1a-590772988044": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:25:32.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2433" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3126,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:25:32.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 18 00:25:32.985: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:25:41.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7006" for this suite.
• [SLOW TEST:8.546 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":188,"skipped":3129,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:25:41.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 18 00:25:42.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 18 00:25:44.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 18 00:25:46.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766342, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 18 00:25:49.096: INFO: Waiting for amount of
service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:01.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9833" for this suite.
STEP: Destroying namespace "webhook-9833-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:19.874 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":189,"skipped":3142,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:01.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 18 00:26:02.012: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 18 00:26:04.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766362, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766362, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766362, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766361, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 18 00:26:07.051: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:07.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9505" for this suite.
STEP: Destroying namespace "webhook-9505-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.887 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":190,"skipped":3142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:07.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-82ec7c6e-25e0-4270-82cf-1a3fe5d02f79
STEP: Creating a pod to test consume secrets
Apr 18 00:26:07.377: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f" in namespace "projected-8813" to be "Succeeded or Failed"
Apr 18 00:26:07.382: INFO: Pod
"pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.606034ms
Apr 18 00:26:09.419: INFO: Pod "pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04202653s
Apr 18 00:26:11.423: INFO: Pod "pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046050914s
STEP: Saw pod success
Apr 18 00:26:11.423: INFO: Pod "pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f" satisfied condition "Succeeded or Failed"
Apr 18 00:26:11.426: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f container projected-secret-volume-test:
STEP: delete the pod
Apr 18 00:26:11.486: INFO: Waiting for pod pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f to disappear
Apr 18 00:26:11.490: INFO: Pod pod-projected-secrets-5da76040-7634-4332-bcc4-f62863fa454f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:11.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8813" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3184,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:11.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-cb83746a-5593-49c9-a00e-b82187fda04e
STEP: Creating a pod to test consume configMaps
Apr 18 00:26:11.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71" in namespace "configmap-1365" to be "Succeeded or Failed"
Apr 18 00:26:11.562: INFO: Pod "pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054296ms
Apr 18 00:26:13.587: INFO: Pod "pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027451579s
Apr 18 00:26:15.591: INFO: Pod "pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.031655786s
STEP: Saw pod success
Apr 18 00:26:15.591: INFO: Pod "pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71" satisfied condition "Succeeded or Failed"
Apr 18 00:26:15.594: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71 container configmap-volume-test:
STEP: delete the pod
Apr 18 00:26:15.613: INFO: Waiting for pod pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71 to disappear
Apr 18 00:26:15.616: INFO: Pod pod-configmaps-8dba9c8d-4a7d-424d-ba0e-23afc174ce71 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:15.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1365" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3197,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:15.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 18 00:26:15.671: INFO: >>> kubeConfig:
/root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:16.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9073" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":193,"skipped":3212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:16.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 18 00:26:20.854: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io]
Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:26:20.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8597" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3262,"failed":0}
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:26:20.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 18 00:26:20.973: INFO: PodSpec: initContainers in spec.initContainers
Apr 18 00:27:10.113: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a1eda9cb-f280-4794-b33a-758dbaec0acf", GenerateName:"", Namespace:"init-container-5274",
SelfLink:"/api/v1/namespaces/init-container-5274/pods/pod-init-a1eda9cb-f280-4794-b33a-758dbaec0acf", UID:"ea9b99e3-f051-4587-8da7-d9097d0abe1e", ResourceVersion:"8937349", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722766380, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"973843526"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9xj7j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005ce4000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9xj7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9xj7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9xj7j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002e0e0a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b96000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002e0e1b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002e0e1d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002e0e1d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002e0e1dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766381, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766381, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766381, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722766380, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.237", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.237"}}, StartTime:(*v1.Time)(0xc00314c040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000b962a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000b96310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://64d13b7172d1248a9b945688a10c1bfb28365cf361586d1906a073e6498b82e6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00314c080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00314c060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002e0e2bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:27:10.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"init-container-5274" for this suite.
• [SLOW TEST:49.226 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":195,"skipped":3262,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:27:10.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:27:16.401: INFO: DNS probes using dns-2261/dns-test-e61e0942-8b50-4ec0-99a1-1f29134147a0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:16.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2261" for this suite. 
• [SLOW TEST:6.324 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":196,"skipped":3273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:16.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-78j9 STEP: Creating a pod to test atomic-volume-subpath Apr 18 00:27:18.184: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-78j9" in namespace "subpath-7347" to be "Succeeded or Failed" Apr 18 00:27:18.225: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Pending", Reason="", readiness=false. Elapsed: 41.27647ms Apr 18 00:27:20.230: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045667445s Apr 18 00:27:22.233: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 4.049424962s Apr 18 00:27:24.238: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 6.054175685s Apr 18 00:27:26.242: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 8.058007012s Apr 18 00:27:28.245: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 10.061430835s Apr 18 00:27:30.249: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 12.064872447s Apr 18 00:27:32.252: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 14.068066284s Apr 18 00:27:34.283: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 16.09855436s Apr 18 00:27:36.286: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 18.101951148s Apr 18 00:27:38.290: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 20.105799952s Apr 18 00:27:40.295: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Running", Reason="", readiness=true. Elapsed: 22.110561625s Apr 18 00:27:42.300: INFO: Pod "pod-subpath-test-secret-78j9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.116129917s STEP: Saw pod success Apr 18 00:27:42.300: INFO: Pod "pod-subpath-test-secret-78j9" satisfied condition "Succeeded or Failed" Apr 18 00:27:42.302: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-78j9 container test-container-subpath-secret-78j9: STEP: delete the pod Apr 18 00:27:42.334: INFO: Waiting for pod pod-subpath-test-secret-78j9 to disappear Apr 18 00:27:42.351: INFO: Pod pod-subpath-test-secret-78j9 no longer exists STEP: Deleting pod pod-subpath-test-secret-78j9 Apr 18 00:27:42.351: INFO: Deleting pod "pod-subpath-test-secret-78j9" in namespace "subpath-7347" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7347" for this suite. • [SLOW TEST:25.885 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":197,"skipped":3324,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:42.361: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:27:42.449: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:46.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8869" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:46.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:27:46.697: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 4.326254ms)
Apr 18 00:27:46.704: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 6.80738ms)
Apr 18 00:27:46.710: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 6.159221ms)
Apr 18 00:27:46.713: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.767259ms)
Apr 18 00:27:46.716: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.605434ms)
Apr 18 00:27:46.719: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.848619ms)
Apr 18 00:27:46.721: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.762074ms)
Apr 18 00:27:46.724: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.904779ms)
Apr 18 00:27:46.728: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.326793ms)
Apr 18 00:27:46.731: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.832643ms)
Apr 18 00:27:46.733: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.755426ms)
Apr 18 00:27:46.736: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.465164ms)
Apr 18 00:27:46.739: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.317449ms)
Apr 18 00:27:46.742: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.788445ms)
Apr 18 00:27:46.745: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.959216ms)
Apr 18 00:27:46.748: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.738168ms)
Apr 18 00:27:46.751: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.293848ms)
Apr 18 00:27:46.755: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.364607ms)
Apr 18 00:27:46.758: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.153864ms)
Apr 18 00:27:46.761: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.439734ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:46.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8950" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":199,"skipped":3370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:46.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 18 00:27:46.859: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:53.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5794" for this suite. 
• [SLOW TEST:7.025 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":200,"skipped":3395,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:53.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 18 00:27:53.884: INFO: Waiting up to 5m0s for pod "pod-0774fe2f-e605-4114-9bf9-df17720fbb3c" in namespace "emptydir-5946" to be "Succeeded or Failed" Apr 18 00:27:53.888: INFO: Pod "pod-0774fe2f-e605-4114-9bf9-df17720fbb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.89574ms Apr 18 00:27:55.892: INFO: Pod "pod-0774fe2f-e605-4114-9bf9-df17720fbb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007758464s Apr 18 00:27:57.900: INFO: Pod "pod-0774fe2f-e605-4114-9bf9-df17720fbb3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015919925s STEP: Saw pod success Apr 18 00:27:57.900: INFO: Pod "pod-0774fe2f-e605-4114-9bf9-df17720fbb3c" satisfied condition "Succeeded or Failed" Apr 18 00:27:57.903: INFO: Trying to get logs from node latest-worker pod pod-0774fe2f-e605-4114-9bf9-df17720fbb3c container test-container: STEP: delete the pod Apr 18 00:27:58.053: INFO: Waiting for pod pod-0774fe2f-e605-4114-9bf9-df17720fbb3c to disappear Apr 18 00:27:58.058: INFO: Pod pod-0774fe2f-e605-4114-9bf9-df17720fbb3c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:27:58.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5946" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3406,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:27:58.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:28:02.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4265" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3406,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:28:02.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 18 00:28:02.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2845' Apr 18 00:28:02.291: INFO: stderr: "" Apr 18 00:28:02.292: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod 
was created Apr 18 00:28:07.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2845 -o json' Apr 18 00:28:07.468: INFO: stderr: "" Apr 18 00:28:07.468: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-18T00:28:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2845\",\n \"resourceVersion\": \"8937707\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2845/pods/e2e-test-httpd-pod\",\n \"uid\": \"0a27d268-d65e-4dc6-93ac-dce86f9e383c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nxbcb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nxbcb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nxbcb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-18T00:28:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-18T00:28:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-18T00:28:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-18T00:28:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c42829e905ec346ccb69f2cabea99920f70e078aeafc7a6eed38fc63f08b9854\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-18T00:28:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.35\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.35\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-18T00:28:02Z\"\n }\n}\n" STEP: replace the image in the pod Apr 18 00:28:07.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2845' Apr 18 00:28:07.777: INFO: stderr: "" Apr 18 00:28:07.777: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 18 00:28:07.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod 
--namespace=kubectl-2845' Apr 18 00:28:22.756: INFO: stderr: "" Apr 18 00:28:22.756: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:28:22.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2845" for this suite. • [SLOW TEST:20.622 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":203,"skipped":3417,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:28:22.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:28:22.842: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4968 I0418 00:28:22.857005 7 runners.go:190] Created replication controller with 
name: svc-latency-rc, namespace: svc-latency-4968, replica count: 1 I0418 00:28:23.907464 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0418 00:28:24.907746 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0418 00:28:25.907976 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 18 00:28:26.055: INFO: Created: latency-svc-d4vlx Apr 18 00:28:26.062: INFO: Got endpoints: latency-svc-d4vlx [54.36298ms] Apr 18 00:28:26.090: INFO: Created: latency-svc-fshjb Apr 18 00:28:26.095: INFO: Got endpoints: latency-svc-fshjb [32.624787ms] Apr 18 00:28:26.115: INFO: Created: latency-svc-2ftdx Apr 18 00:28:26.131: INFO: Got endpoints: latency-svc-2ftdx [68.77342ms] Apr 18 00:28:26.146: INFO: Created: latency-svc-9lcxk Apr 18 00:28:26.155: INFO: Got endpoints: latency-svc-9lcxk [92.475809ms] Apr 18 00:28:26.200: INFO: Created: latency-svc-2k7jc Apr 18 00:28:26.209: INFO: Got endpoints: latency-svc-2k7jc [146.480363ms] Apr 18 00:28:26.223: INFO: Created: latency-svc-8cqbk Apr 18 00:28:26.233: INFO: Got endpoints: latency-svc-8cqbk [170.729141ms] Apr 18 00:28:26.246: INFO: Created: latency-svc-5475g Apr 18 00:28:26.258: INFO: Got endpoints: latency-svc-5475g [195.242674ms] Apr 18 00:28:26.275: INFO: Created: latency-svc-hvcmd Apr 18 00:28:26.287: INFO: Got endpoints: latency-svc-hvcmd [224.30566ms] Apr 18 00:28:26.337: INFO: Created: latency-svc-sz898 Apr 18 00:28:26.360: INFO: Created: latency-svc-wcg4g Apr 18 00:28:26.360: INFO: Got endpoints: latency-svc-sz898 [296.948939ms] Apr 18 00:28:26.380: INFO: Got endpoints: latency-svc-wcg4g [317.615175ms] Apr 18 00:28:26.397: INFO: Created: latency-svc-f7sh5 Apr 18 00:28:26.410: INFO: Got endpoints: latency-svc-f7sh5 [347.118553ms] 
Apr 18 00:28:26.427: INFO: Created: latency-svc-n8nwq Apr 18 00:28:26.469: INFO: Got endpoints: latency-svc-n8nwq [406.621903ms] Apr 18 00:28:26.475: INFO: Created: latency-svc-t2wxv Apr 18 00:28:26.488: INFO: Got endpoints: latency-svc-t2wxv [424.705526ms] Apr 18 00:28:26.509: INFO: Created: latency-svc-zhglv Apr 18 00:28:26.524: INFO: Got endpoints: latency-svc-zhglv [460.874557ms] Apr 18 00:28:26.539: INFO: Created: latency-svc-5p6hc Apr 18 00:28:26.547: INFO: Got endpoints: latency-svc-5p6hc [484.539749ms] Apr 18 00:28:26.608: INFO: Created: latency-svc-2qrqp Apr 18 00:28:26.630: INFO: Got endpoints: latency-svc-2qrqp [566.410141ms] Apr 18 00:28:26.631: INFO: Created: latency-svc-hmsg9 Apr 18 00:28:26.647: INFO: Got endpoints: latency-svc-hmsg9 [551.391576ms] Apr 18 00:28:26.679: INFO: Created: latency-svc-z4bth Apr 18 00:28:26.701: INFO: Got endpoints: latency-svc-z4bth [569.7446ms] Apr 18 00:28:26.751: INFO: Created: latency-svc-7bp24 Apr 18 00:28:26.754: INFO: Got endpoints: latency-svc-7bp24 [599.247044ms] Apr 18 00:28:26.773: INFO: Created: latency-svc-qhcqv Apr 18 00:28:26.785: INFO: Got endpoints: latency-svc-qhcqv [575.566082ms] Apr 18 00:28:26.803: INFO: Created: latency-svc-hrtj7 Apr 18 00:28:26.825: INFO: Got endpoints: latency-svc-hrtj7 [592.223254ms] Apr 18 00:28:26.840: INFO: Created: latency-svc-fqjql Apr 18 00:28:26.900: INFO: Got endpoints: latency-svc-fqjql [642.675003ms] Apr 18 00:28:26.902: INFO: Created: latency-svc-c5clm Apr 18 00:28:26.907: INFO: Got endpoints: latency-svc-c5clm [620.037088ms] Apr 18 00:28:26.925: INFO: Created: latency-svc-9k9hk Apr 18 00:28:26.955: INFO: Got endpoints: latency-svc-9k9hk [595.670058ms] Apr 18 00:28:26.979: INFO: Created: latency-svc-9c4rh Apr 18 00:28:26.991: INFO: Got endpoints: latency-svc-9c4rh [610.98059ms] Apr 18 00:28:27.032: INFO: Created: latency-svc-hcrhn Apr 18 00:28:27.055: INFO: Got endpoints: latency-svc-hcrhn [645.157521ms] Apr 18 00:28:27.056: INFO: Created: latency-svc-zz5r5 Apr 18 
00:28:27.069: INFO: Got endpoints: latency-svc-zz5r5 [600.037384ms] Apr 18 00:28:27.085: INFO: Created: latency-svc-5ppbp Apr 18 00:28:27.099: INFO: Got endpoints: latency-svc-5ppbp [611.628788ms] Apr 18 00:28:27.164: INFO: Created: latency-svc-nt44b Apr 18 00:28:27.189: INFO: Got endpoints: latency-svc-nt44b [665.448921ms] Apr 18 00:28:27.190: INFO: Created: latency-svc-pgjzn Apr 18 00:28:27.197: INFO: Got endpoints: latency-svc-pgjzn [649.783558ms] Apr 18 00:28:27.213: INFO: Created: latency-svc-cssf5 Apr 18 00:28:27.222: INFO: Got endpoints: latency-svc-cssf5 [591.810162ms] Apr 18 00:28:27.237: INFO: Created: latency-svc-7sgrl Apr 18 00:28:27.245: INFO: Got endpoints: latency-svc-7sgrl [598.838114ms] Apr 18 00:28:27.259: INFO: Created: latency-svc-w5rms Apr 18 00:28:27.295: INFO: Got endpoints: latency-svc-w5rms [594.557562ms] Apr 18 00:28:27.307: INFO: Created: latency-svc-r45cg Apr 18 00:28:27.324: INFO: Got endpoints: latency-svc-r45cg [569.46885ms] Apr 18 00:28:27.343: INFO: Created: latency-svc-h2fb8 Apr 18 00:28:27.360: INFO: Got endpoints: latency-svc-h2fb8 [575.466962ms] Apr 18 00:28:27.379: INFO: Created: latency-svc-wrtc9 Apr 18 00:28:27.433: INFO: Got endpoints: latency-svc-wrtc9 [607.629639ms] Apr 18 00:28:27.434: INFO: Created: latency-svc-6qgtr Apr 18 00:28:27.440: INFO: Got endpoints: latency-svc-6qgtr [539.658164ms] Apr 18 00:28:27.464: INFO: Created: latency-svc-tm464 Apr 18 00:28:27.477: INFO: Got endpoints: latency-svc-tm464 [570.127245ms] Apr 18 00:28:27.494: INFO: Created: latency-svc-fskg5 Apr 18 00:28:27.507: INFO: Got endpoints: latency-svc-fskg5 [551.721919ms] Apr 18 00:28:27.524: INFO: Created: latency-svc-x49rx Apr 18 00:28:27.553: INFO: Got endpoints: latency-svc-x49rx [561.165198ms] Apr 18 00:28:27.565: INFO: Created: latency-svc-ps7ps Apr 18 00:28:27.584: INFO: Got endpoints: latency-svc-ps7ps [529.082876ms] Apr 18 00:28:27.601: INFO: Created: latency-svc-gpg4x Apr 18 00:28:27.626: INFO: Got endpoints: latency-svc-gpg4x 
[556.099492ms] Apr 18 00:28:27.684: INFO: Created: latency-svc-5q4l8 Apr 18 00:28:27.689: INFO: Got endpoints: latency-svc-5q4l8 [589.184693ms] Apr 18 00:28:27.705: INFO: Created: latency-svc-jmmlh Apr 18 00:28:27.713: INFO: Got endpoints: latency-svc-jmmlh [523.572306ms] Apr 18 00:28:27.729: INFO: Created: latency-svc-86rsx Apr 18 00:28:27.736: INFO: Got endpoints: latency-svc-86rsx [539.13913ms] Apr 18 00:28:27.759: INFO: Created: latency-svc-ntnc8 Apr 18 00:28:27.835: INFO: Got endpoints: latency-svc-ntnc8 [613.054186ms] Apr 18 00:28:27.847: INFO: Created: latency-svc-mbtmp Apr 18 00:28:27.877: INFO: Got endpoints: latency-svc-mbtmp [631.792574ms] Apr 18 00:28:27.920: INFO: Created: latency-svc-9q587 Apr 18 00:28:27.985: INFO: Got endpoints: latency-svc-9q587 [689.647171ms] Apr 18 00:28:27.991: INFO: Created: latency-svc-jtr28 Apr 18 00:28:28.021: INFO: Created: latency-svc-ts4p8 Apr 18 00:28:28.023: INFO: Got endpoints: latency-svc-jtr28 [698.932966ms] Apr 18 00:28:28.060: INFO: Got endpoints: latency-svc-ts4p8 [699.666988ms] Apr 18 00:28:28.110: INFO: Created: latency-svc-vxqpm Apr 18 00:28:28.118: INFO: Got endpoints: latency-svc-vxqpm [684.709917ms] Apr 18 00:28:28.165: INFO: Created: latency-svc-rz7cs Apr 18 00:28:28.177: INFO: Got endpoints: latency-svc-rz7cs [737.252196ms] Apr 18 00:28:28.248: INFO: Created: latency-svc-5kjq7 Apr 18 00:28:28.268: INFO: Created: latency-svc-v6qw8 Apr 18 00:28:28.268: INFO: Got endpoints: latency-svc-5kjq7 [791.178222ms] Apr 18 00:28:28.279: INFO: Got endpoints: latency-svc-v6qw8 [772.167553ms] Apr 18 00:28:28.305: INFO: Created: latency-svc-snw27 Apr 18 00:28:28.316: INFO: Got endpoints: latency-svc-snw27 [763.639501ms] Apr 18 00:28:28.347: INFO: Created: latency-svc-g222b Apr 18 00:28:28.388: INFO: Got endpoints: latency-svc-g222b [803.300157ms] Apr 18 00:28:28.424: INFO: Created: latency-svc-cblzk Apr 18 00:28:28.438: INFO: Got endpoints: latency-svc-cblzk [811.825365ms] Apr 18 00:28:28.465: INFO: Created: 
latency-svc-rwxwz Apr 18 00:28:28.480: INFO: Got endpoints: latency-svc-rwxwz [790.974988ms] Apr 18 00:28:28.523: INFO: Created: latency-svc-j5txw Apr 18 00:28:28.540: INFO: Got endpoints: latency-svc-j5txw [826.963124ms] Apr 18 00:28:28.556: INFO: Created: latency-svc-6nmhx Apr 18 00:28:28.570: INFO: Got endpoints: latency-svc-6nmhx [833.273672ms] Apr 18 00:28:28.599: INFO: Created: latency-svc-ph24s Apr 18 00:28:28.611: INFO: Got endpoints: latency-svc-ph24s [776.770647ms] Apr 18 00:28:28.679: INFO: Created: latency-svc-28g7q Apr 18 00:28:28.708: INFO: Got endpoints: latency-svc-28g7q [830.349688ms] Apr 18 00:28:28.747: INFO: Created: latency-svc-2b9tv Apr 18 00:28:28.758: INFO: Got endpoints: latency-svc-2b9tv [773.22343ms] Apr 18 00:28:28.811: INFO: Created: latency-svc-pnkhw Apr 18 00:28:28.832: INFO: Got endpoints: latency-svc-pnkhw [809.566252ms] Apr 18 00:28:28.833: INFO: Created: latency-svc-7ddhc Apr 18 00:28:28.848: INFO: Got endpoints: latency-svc-7ddhc [788.224085ms] Apr 18 00:28:28.898: INFO: Created: latency-svc-nl7zz Apr 18 00:28:28.936: INFO: Got endpoints: latency-svc-nl7zz [818.723612ms] Apr 18 00:28:28.950: INFO: Created: latency-svc-kjhj2 Apr 18 00:28:28.962: INFO: Got endpoints: latency-svc-kjhj2 [784.715391ms] Apr 18 00:28:28.980: INFO: Created: latency-svc-4ctp2 Apr 18 00:28:29.005: INFO: Got endpoints: latency-svc-4ctp2 [736.329342ms] Apr 18 00:28:29.036: INFO: Created: latency-svc-c8lb5 Apr 18 00:28:29.070: INFO: Got endpoints: latency-svc-c8lb5 [790.737233ms] Apr 18 00:28:29.096: INFO: Created: latency-svc-mvb9c Apr 18 00:28:29.109: INFO: Got endpoints: latency-svc-mvb9c [792.464792ms] Apr 18 00:28:29.200: INFO: Created: latency-svc-6gbpx Apr 18 00:28:29.220: INFO: Got endpoints: latency-svc-6gbpx [832.288611ms] Apr 18 00:28:29.221: INFO: Created: latency-svc-d2jmp Apr 18 00:28:29.235: INFO: Got endpoints: latency-svc-d2jmp [796.981612ms] Apr 18 00:28:29.256: INFO: Created: latency-svc-lh8kt Apr 18 00:28:29.271: INFO: Got endpoints: 
latency-svc-lh8kt [790.957219ms] Apr 18 00:28:29.338: INFO: Created: latency-svc-5gngz Apr 18 00:28:29.342: INFO: Got endpoints: latency-svc-5gngz [802.407657ms] Apr 18 00:28:29.365: INFO: Created: latency-svc-tbg9k Apr 18 00:28:29.378: INFO: Got endpoints: latency-svc-tbg9k [808.436094ms] Apr 18 00:28:29.402: INFO: Created: latency-svc-zfvxq Apr 18 00:28:29.414: INFO: Got endpoints: latency-svc-zfvxq [802.935912ms] Apr 18 00:28:29.481: INFO: Created: latency-svc-gdzjl Apr 18 00:28:29.496: INFO: Created: latency-svc-8j2mr Apr 18 00:28:29.496: INFO: Got endpoints: latency-svc-gdzjl [788.484252ms] Apr 18 00:28:29.507: INFO: Got endpoints: latency-svc-8j2mr [748.654091ms] Apr 18 00:28:29.526: INFO: Created: latency-svc-h6mh4 Apr 18 00:28:29.538: INFO: Got endpoints: latency-svc-h6mh4 [705.373617ms] Apr 18 00:28:29.564: INFO: Created: latency-svc-t9rvx Apr 18 00:28:29.580: INFO: Got endpoints: latency-svc-t9rvx [731.530106ms] Apr 18 00:28:29.619: INFO: Created: latency-svc-t7gbp Apr 18 00:28:29.622: INFO: Got endpoints: latency-svc-t7gbp [685.68885ms] Apr 18 00:28:29.648: INFO: Created: latency-svc-fm99m Apr 18 00:28:29.664: INFO: Got endpoints: latency-svc-fm99m [701.431913ms] Apr 18 00:28:29.751: INFO: Created: latency-svc-qsdlx Apr 18 00:28:29.754: INFO: Got endpoints: latency-svc-qsdlx [749.394405ms] Apr 18 00:28:29.792: INFO: Created: latency-svc-kdww2 Apr 18 00:28:29.810: INFO: Got endpoints: latency-svc-kdww2 [739.801455ms] Apr 18 00:28:29.900: INFO: Created: latency-svc-w5ms6 Apr 18 00:28:29.922: INFO: Got endpoints: latency-svc-w5ms6 [813.440823ms] Apr 18 00:28:29.923: INFO: Created: latency-svc-xgttt Apr 18 00:28:29.935: INFO: Got endpoints: latency-svc-xgttt [715.404459ms] Apr 18 00:28:29.952: INFO: Created: latency-svc-4pl74 Apr 18 00:28:29.976: INFO: Got endpoints: latency-svc-4pl74 [741.194856ms] Apr 18 00:28:30.038: INFO: Created: latency-svc-fxcsv Apr 18 00:28:30.043: INFO: Got endpoints: latency-svc-fxcsv [772.604993ms] Apr 18 00:28:30.061: INFO: 
Created: latency-svc-smvx9 Apr 18 00:28:30.073: INFO: Got endpoints: latency-svc-smvx9 [730.985434ms] Apr 18 00:28:30.097: INFO: Created: latency-svc-mzcgq Apr 18 00:28:30.128: INFO: Got endpoints: latency-svc-mzcgq [749.268318ms] Apr 18 00:28:30.168: INFO: Created: latency-svc-6qf4x Apr 18 00:28:30.184: INFO: Got endpoints: latency-svc-6qf4x [769.621922ms] Apr 18 00:28:30.205: INFO: Created: latency-svc-sztg5 Apr 18 00:28:30.214: INFO: Got endpoints: latency-svc-sztg5 [717.869626ms] Apr 18 00:28:30.246: INFO: Created: latency-svc-t7264 Apr 18 00:28:30.271: INFO: Got endpoints: latency-svc-t7264 [764.036556ms] Apr 18 00:28:30.295: INFO: Created: latency-svc-jgnpj Apr 18 00:28:30.311: INFO: Got endpoints: latency-svc-jgnpj [772.999595ms] Apr 18 00:28:30.354: INFO: Created: latency-svc-dwdxs Apr 18 00:28:30.364: INFO: Got endpoints: latency-svc-dwdxs [784.369994ms] Apr 18 00:28:30.444: INFO: Created: latency-svc-8htb6 Apr 18 00:28:30.460: INFO: Got endpoints: latency-svc-8htb6 [838.24087ms] Apr 18 00:28:30.485: INFO: Created: latency-svc-4qcc5 Apr 18 00:28:30.517: INFO: Got endpoints: latency-svc-4qcc5 [852.981245ms] Apr 18 00:28:30.546: INFO: Created: latency-svc-r8qr4 Apr 18 00:28:30.559: INFO: Got endpoints: latency-svc-r8qr4 [804.422404ms] Apr 18 00:28:30.649: INFO: Created: latency-svc-dqxfx Apr 18 00:28:30.655: INFO: Got endpoints: latency-svc-dqxfx [845.402635ms] Apr 18 00:28:30.689: INFO: Created: latency-svc-dpxdc Apr 18 00:28:30.714: INFO: Got endpoints: latency-svc-dpxdc [792.053721ms] Apr 18 00:28:30.781: INFO: Created: latency-svc-jlh76 Apr 18 00:28:30.791: INFO: Got endpoints: latency-svc-jlh76 [855.780878ms] Apr 18 00:28:30.823: INFO: Created: latency-svc-xpggk Apr 18 00:28:30.834: INFO: Got endpoints: latency-svc-xpggk [858.405161ms] Apr 18 00:28:30.854: INFO: Created: latency-svc-gh4vz Apr 18 00:28:30.864: INFO: Got endpoints: latency-svc-gh4vz [820.673355ms] Apr 18 00:28:30.918: INFO: Created: latency-svc-dwzjj Apr 18 00:28:30.927: INFO: Got 
endpoints: latency-svc-dwzjj [853.646094ms] Apr 18 00:28:30.954: INFO: Created: latency-svc-hxc59 Apr 18 00:28:30.963: INFO: Got endpoints: latency-svc-hxc59 [835.281269ms] Apr 18 00:28:30.983: INFO: Created: latency-svc-cg7js Apr 18 00:28:30.993: INFO: Got endpoints: latency-svc-cg7js [808.946648ms] Apr 18 00:28:31.014: INFO: Created: latency-svc-9dvr4 Apr 18 00:28:31.044: INFO: Got endpoints: latency-svc-9dvr4 [829.662749ms] Apr 18 00:28:31.063: INFO: Created: latency-svc-cp8wf Apr 18 00:28:31.087: INFO: Got endpoints: latency-svc-cp8wf [815.79648ms] Apr 18 00:28:31.135: INFO: Created: latency-svc-clrpx Apr 18 00:28:31.158: INFO: Got endpoints: latency-svc-clrpx [847.254127ms] Apr 18 00:28:31.177: INFO: Created: latency-svc-sv2np Apr 18 00:28:31.191: INFO: Got endpoints: latency-svc-sv2np [826.946768ms] Apr 18 00:28:31.211: INFO: Created: latency-svc-l9vfr Apr 18 00:28:31.235: INFO: Got endpoints: latency-svc-l9vfr [774.697938ms] Apr 18 00:28:31.283: INFO: Created: latency-svc-cvfcp Apr 18 00:28:31.309: INFO: Got endpoints: latency-svc-cvfcp [792.254746ms] Apr 18 00:28:31.312: INFO: Created: latency-svc-fbt5f Apr 18 00:28:31.325: INFO: Got endpoints: latency-svc-fbt5f [766.68916ms] Apr 18 00:28:31.345: INFO: Created: latency-svc-kqlv7 Apr 18 00:28:31.363: INFO: Got endpoints: latency-svc-kqlv7 [707.309072ms] Apr 18 00:28:31.411: INFO: Created: latency-svc-bmb2m Apr 18 00:28:31.427: INFO: Got endpoints: latency-svc-bmb2m [712.97784ms] Apr 18 00:28:31.439: INFO: Created: latency-svc-mvkr6 Apr 18 00:28:31.451: INFO: Got endpoints: latency-svc-mvkr6 [660.099904ms] Apr 18 00:28:31.469: INFO: Created: latency-svc-2p6p7 Apr 18 00:28:31.481: INFO: Got endpoints: latency-svc-2p6p7 [646.948604ms] Apr 18 00:28:31.499: INFO: Created: latency-svc-xwnt7 Apr 18 00:28:31.557: INFO: Got endpoints: latency-svc-xwnt7 [693.091872ms] Apr 18 00:28:31.559: INFO: Created: latency-svc-2d4kv Apr 18 00:28:31.562: INFO: Got endpoints: latency-svc-2d4kv [635.038501ms] Apr 18 00:28:31.597: 
INFO: Created: latency-svc-97tbh Apr 18 00:28:31.610: INFO: Got endpoints: latency-svc-97tbh [647.314129ms] Apr 18 00:28:31.640: INFO: Created: latency-svc-jnpbj Apr 18 00:28:31.697: INFO: Got endpoints: latency-svc-jnpbj [703.369182ms] Apr 18 00:28:31.699: INFO: Created: latency-svc-w6mpp Apr 18 00:28:31.707: INFO: Got endpoints: latency-svc-w6mpp [662.658668ms] Apr 18 00:28:31.728: INFO: Created: latency-svc-kmrpr Apr 18 00:28:31.737: INFO: Got endpoints: latency-svc-kmrpr [649.921502ms] Apr 18 00:28:31.769: INFO: Created: latency-svc-v2whn Apr 18 00:28:31.787: INFO: Got endpoints: latency-svc-v2whn [628.72419ms] Apr 18 00:28:31.841: INFO: Created: latency-svc-5pkm8 Apr 18 00:28:31.867: INFO: Created: latency-svc-6fg6k Apr 18 00:28:31.867: INFO: Got endpoints: latency-svc-5pkm8 [675.482783ms] Apr 18 00:28:31.903: INFO: Got endpoints: latency-svc-6fg6k [667.721328ms] Apr 18 00:28:31.972: INFO: Created: latency-svc-5wqfx Apr 18 00:28:31.979: INFO: Got endpoints: latency-svc-5wqfx [669.524954ms] Apr 18 00:28:32.003: INFO: Created: latency-svc-5vfws Apr 18 00:28:32.014: INFO: Got endpoints: latency-svc-5vfws [688.794354ms] Apr 18 00:28:32.039: INFO: Created: latency-svc-4dhsv Apr 18 00:28:32.056: INFO: Got endpoints: latency-svc-4dhsv [693.646824ms] Apr 18 00:28:32.104: INFO: Created: latency-svc-h7knn Apr 18 00:28:32.110: INFO: Got endpoints: latency-svc-h7knn [682.930953ms] Apr 18 00:28:32.131: INFO: Created: latency-svc-cv7gn Apr 18 00:28:32.146: INFO: Got endpoints: latency-svc-cv7gn [694.806304ms] Apr 18 00:28:32.160: INFO: Created: latency-svc-d8r7m Apr 18 00:28:32.174: INFO: Got endpoints: latency-svc-d8r7m [692.423946ms] Apr 18 00:28:32.190: INFO: Created: latency-svc-58vsh Apr 18 00:28:32.223: INFO: Got endpoints: latency-svc-58vsh [665.81569ms] Apr 18 00:28:32.237: INFO: Created: latency-svc-6qmmr Apr 18 00:28:32.252: INFO: Got endpoints: latency-svc-6qmmr [689.471039ms] Apr 18 00:28:32.273: INFO: Created: latency-svc-xtxqh Apr 18 00:28:32.282: INFO: Got 
endpoints: latency-svc-xtxqh [671.491715ms] Apr 18 00:28:32.299: INFO: Created: latency-svc-9lhq2 Apr 18 00:28:32.306: INFO: Got endpoints: latency-svc-9lhq2 [609.184789ms] Apr 18 00:28:32.322: INFO: Created: latency-svc-b8lm8 Apr 18 00:28:32.356: INFO: Got endpoints: latency-svc-b8lm8 [649.489234ms] Apr 18 00:28:32.364: INFO: Created: latency-svc-bz9xz Apr 18 00:28:32.374: INFO: Got endpoints: latency-svc-bz9xz [636.82807ms] Apr 18 00:28:32.407: INFO: Created: latency-svc-hv89b Apr 18 00:28:32.416: INFO: Got endpoints: latency-svc-hv89b [628.837378ms] Apr 18 00:28:32.505: INFO: Created: latency-svc-qmdzp Apr 18 00:28:32.526: INFO: Created: latency-svc-w8dwx Apr 18 00:28:32.526: INFO: Got endpoints: latency-svc-qmdzp [659.052353ms] Apr 18 00:28:32.542: INFO: Got endpoints: latency-svc-w8dwx [638.717086ms] Apr 18 00:28:32.569: INFO: Created: latency-svc-hvz7f Apr 18 00:28:32.584: INFO: Got endpoints: latency-svc-hvz7f [605.228346ms] Apr 18 00:28:32.655: INFO: Created: latency-svc-v825m Apr 18 00:28:32.682: INFO: Got endpoints: latency-svc-v825m [668.053469ms] Apr 18 00:28:32.683: INFO: Created: latency-svc-29f8w Apr 18 00:28:32.710: INFO: Got endpoints: latency-svc-29f8w [653.357299ms] Apr 18 00:28:32.747: INFO: Created: latency-svc-jlc2z Apr 18 00:28:32.774: INFO: Got endpoints: latency-svc-jlc2z [663.88256ms] Apr 18 00:28:32.789: INFO: Created: latency-svc-4mzjt Apr 18 00:28:32.804: INFO: Got endpoints: latency-svc-4mzjt [657.350877ms] Apr 18 00:28:32.819: INFO: Created: latency-svc-nhh5z Apr 18 00:28:32.833: INFO: Got endpoints: latency-svc-nhh5z [659.261036ms] Apr 18 00:28:32.849: INFO: Created: latency-svc-dl8vz Apr 18 00:28:32.863: INFO: Got endpoints: latency-svc-dl8vz [639.953858ms] Apr 18 00:28:32.900: INFO: Created: latency-svc-n8srh Apr 18 00:28:32.916: INFO: Got endpoints: latency-svc-n8srh [664.566495ms] Apr 18 00:28:32.917: INFO: Created: latency-svc-wl8jb Apr 18 00:28:32.929: INFO: Got endpoints: latency-svc-wl8jb [647.079818ms] Apr 18 00:28:32.987: 
INFO: Created: latency-svc-rcscv Apr 18 00:28:33.020: INFO: Got endpoints: latency-svc-rcscv [714.095138ms] Apr 18 00:28:33.035: INFO: Created: latency-svc-7hd2v Apr 18 00:28:33.051: INFO: Got endpoints: latency-svc-7hd2v [694.873194ms] Apr 18 00:28:33.071: INFO: Created: latency-svc-mmfsf Apr 18 00:28:33.088: INFO: Got endpoints: latency-svc-mmfsf [713.898294ms] Apr 18 00:28:33.106: INFO: Created: latency-svc-bbxt2 Apr 18 00:28:33.140: INFO: Got endpoints: latency-svc-bbxt2 [723.860545ms] Apr 18 00:28:33.150: INFO: Created: latency-svc-z8d2z Apr 18 00:28:33.165: INFO: Got endpoints: latency-svc-z8d2z [638.925523ms] Apr 18 00:28:33.210: INFO: Created: latency-svc-pt797 Apr 18 00:28:33.219: INFO: Got endpoints: latency-svc-pt797 [676.933348ms] Apr 18 00:28:33.239: INFO: Created: latency-svc-xtr22 Apr 18 00:28:33.271: INFO: Got endpoints: latency-svc-xtr22 [687.378988ms] Apr 18 00:28:33.287: INFO: Created: latency-svc-22zsz Apr 18 00:28:33.311: INFO: Got endpoints: latency-svc-22zsz [628.218743ms] Apr 18 00:28:33.335: INFO: Created: latency-svc-lc6hb Apr 18 00:28:33.361: INFO: Got endpoints: latency-svc-lc6hb [650.865262ms] Apr 18 00:28:33.427: INFO: Created: latency-svc-5qstg Apr 18 00:28:33.444: INFO: Got endpoints: latency-svc-5qstg [669.560701ms] Apr 18 00:28:33.444: INFO: Created: latency-svc-cdfbk Apr 18 00:28:33.468: INFO: Got endpoints: latency-svc-cdfbk [664.529337ms] Apr 18 00:28:33.498: INFO: Created: latency-svc-48qb9 Apr 18 00:28:33.510: INFO: Got endpoints: latency-svc-48qb9 [677.020286ms] Apr 18 00:28:33.527: INFO: Created: latency-svc-js8kt Apr 18 00:28:33.557: INFO: Got endpoints: latency-svc-js8kt [693.805917ms] Apr 18 00:28:33.568: INFO: Created: latency-svc-s6xss Apr 18 00:28:33.593: INFO: Got endpoints: latency-svc-s6xss [676.228492ms] Apr 18 00:28:33.629: INFO: Created: latency-svc-9nd55 Apr 18 00:28:33.645: INFO: Got endpoints: latency-svc-9nd55 [715.588373ms] Apr 18 00:28:33.684: INFO: Created: latency-svc-c6p2v Apr 18 00:28:33.692: INFO: Got 
endpoints: latency-svc-c6p2v [671.853984ms] Apr 18 00:28:33.707: INFO: Created: latency-svc-p5wdx Apr 18 00:28:33.739: INFO: Got endpoints: latency-svc-p5wdx [688.286865ms] Apr 18 00:28:33.773: INFO: Created: latency-svc-q452p Apr 18 00:28:33.816: INFO: Got endpoints: latency-svc-q452p [728.029078ms] Apr 18 00:28:33.844: INFO: Created: latency-svc-rwlgv Apr 18 00:28:33.866: INFO: Got endpoints: latency-svc-rwlgv [726.27088ms] Apr 18 00:28:33.882: INFO: Created: latency-svc-w6f5d Apr 18 00:28:33.906: INFO: Got endpoints: latency-svc-w6f5d [740.978074ms] Apr 18 00:28:33.948: INFO: Created: latency-svc-c79sw Apr 18 00:28:33.956: INFO: Got endpoints: latency-svc-c79sw [736.849239ms] Apr 18 00:28:34.007: INFO: Created: latency-svc-cxfkh Apr 18 00:28:34.020: INFO: Got endpoints: latency-svc-cxfkh [748.216373ms] Apr 18 00:28:34.036: INFO: Created: latency-svc-67gst Apr 18 00:28:34.062: INFO: Got endpoints: latency-svc-67gst [750.794322ms] Apr 18 00:28:34.073: INFO: Created: latency-svc-pwcl2 Apr 18 00:28:34.085: INFO: Got endpoints: latency-svc-pwcl2 [724.33559ms] Apr 18 00:28:34.108: INFO: Created: latency-svc-lr7m8 Apr 18 00:28:34.121: INFO: Got endpoints: latency-svc-lr7m8 [677.133384ms] Apr 18 00:28:34.140: INFO: Created: latency-svc-5gw4n Apr 18 00:28:34.152: INFO: Got endpoints: latency-svc-5gw4n [683.380255ms] Apr 18 00:28:34.182: INFO: Created: latency-svc-45cht Apr 18 00:28:34.200: INFO: Created: latency-svc-2fl62 Apr 18 00:28:34.200: INFO: Got endpoints: latency-svc-45cht [689.487832ms] Apr 18 00:28:34.213: INFO: Got endpoints: latency-svc-2fl62 [656.567022ms] Apr 18 00:28:34.230: INFO: Created: latency-svc-bggpq Apr 18 00:28:34.243: INFO: Got endpoints: latency-svc-bggpq [650.665021ms] Apr 18 00:28:34.264: INFO: Created: latency-svc-vm74d Apr 18 00:28:34.313: INFO: Got endpoints: latency-svc-vm74d [668.645857ms] Apr 18 00:28:34.332: INFO: Created: latency-svc-tq65c Apr 18 00:28:34.351: INFO: Got endpoints: latency-svc-tq65c [659.361177ms] Apr 18 00:28:34.405: 
INFO: Created: latency-svc-6kx9p Apr 18 00:28:34.445: INFO: Got endpoints: latency-svc-6kx9p [705.432747ms] Apr 18 00:28:34.456: INFO: Created: latency-svc-vz9rs Apr 18 00:28:34.471: INFO: Got endpoints: latency-svc-vz9rs [654.915543ms] Apr 18 00:28:34.494: INFO: Created: latency-svc-898mw Apr 18 00:28:34.507: INFO: Got endpoints: latency-svc-898mw [641.12962ms] Apr 18 00:28:34.589: INFO: Created: latency-svc-2h8hf Apr 18 00:28:34.594: INFO: Got endpoints: latency-svc-2h8hf [688.337011ms] Apr 18 00:28:34.613: INFO: Created: latency-svc-6sl67 Apr 18 00:28:34.625: INFO: Got endpoints: latency-svc-6sl67 [669.168715ms] Apr 18 00:28:34.643: INFO: Created: latency-svc-grddz Apr 18 00:28:34.655: INFO: Got endpoints: latency-svc-grddz [635.009891ms] Apr 18 00:28:34.732: INFO: Created: latency-svc-2tcnh Apr 18 00:28:34.771: INFO: Got endpoints: latency-svc-2tcnh [708.97651ms] Apr 18 00:28:34.771: INFO: Created: latency-svc-9hxfr Apr 18 00:28:34.800: INFO: Got endpoints: latency-svc-9hxfr [715.219791ms] Apr 18 00:28:34.831: INFO: Created: latency-svc-8ww77 Apr 18 00:28:35.140: INFO: Got endpoints: latency-svc-8ww77 [1.018876805s] Apr 18 00:28:35.143: INFO: Created: latency-svc-z8zht Apr 18 00:28:35.697: INFO: Got endpoints: latency-svc-z8zht [1.545670134s] Apr 18 00:28:35.722: INFO: Created: latency-svc-rbcv2 Apr 18 00:28:35.747: INFO: Got endpoints: latency-svc-rbcv2 [1.547204349s] Apr 18 00:28:35.773: INFO: Created: latency-svc-2prk7 Apr 18 00:28:35.783: INFO: Got endpoints: latency-svc-2prk7 [1.569557172s] Apr 18 00:28:35.835: INFO: Created: latency-svc-x9gl9 Apr 18 00:28:35.855: INFO: Got endpoints: latency-svc-x9gl9 [1.611364006s] Apr 18 00:28:35.857: INFO: Created: latency-svc-65k8g Apr 18 00:28:35.878: INFO: Got endpoints: latency-svc-65k8g [1.564915881s] Apr 18 00:28:35.897: INFO: Created: latency-svc-mgszf Apr 18 00:28:35.908: INFO: Got endpoints: latency-svc-mgszf [1.557000809s] Apr 18 00:28:35.927: INFO: Created: latency-svc-n8sw8 Apr 18 00:28:35.961: INFO: Got 
endpoints: latency-svc-n8sw8 [1.515910878s] Apr 18 00:28:35.974: INFO: Created: latency-svc-f6fhs Apr 18 00:28:35.999: INFO: Got endpoints: latency-svc-f6fhs [1.527752137s] Apr 18 00:28:36.146: INFO: Created: latency-svc-5sfwj Apr 18 00:28:36.171: INFO: Created: latency-svc-lnlb9 Apr 18 00:28:36.171: INFO: Got endpoints: latency-svc-5sfwj [1.664059331s] Apr 18 00:28:36.196: INFO: Got endpoints: latency-svc-lnlb9 [1.601328361s] Apr 18 00:28:36.214: INFO: Created: latency-svc-hhgt7 Apr 18 00:28:36.224: INFO: Got endpoints: latency-svc-hhgt7 [1.598768948s] Apr 18 00:28:36.224: INFO: Latencies: [32.624787ms 68.77342ms 92.475809ms 146.480363ms 170.729141ms 195.242674ms 224.30566ms 296.948939ms 317.615175ms 347.118553ms 406.621903ms 424.705526ms 460.874557ms 484.539749ms 523.572306ms 529.082876ms 539.13913ms 539.658164ms 551.391576ms 551.721919ms 556.099492ms 561.165198ms 566.410141ms 569.46885ms 569.7446ms 570.127245ms 575.466962ms 575.566082ms 589.184693ms 591.810162ms 592.223254ms 594.557562ms 595.670058ms 598.838114ms 599.247044ms 600.037384ms 605.228346ms 607.629639ms 609.184789ms 610.98059ms 611.628788ms 613.054186ms 620.037088ms 628.218743ms 628.72419ms 628.837378ms 631.792574ms 635.009891ms 635.038501ms 636.82807ms 638.717086ms 638.925523ms 639.953858ms 641.12962ms 642.675003ms 645.157521ms 646.948604ms 647.079818ms 647.314129ms 649.489234ms 649.783558ms 649.921502ms 650.665021ms 650.865262ms 653.357299ms 654.915543ms 656.567022ms 657.350877ms 659.052353ms 659.261036ms 659.361177ms 660.099904ms 662.658668ms 663.88256ms 664.529337ms 664.566495ms 665.448921ms 665.81569ms 667.721328ms 668.053469ms 668.645857ms 669.168715ms 669.524954ms 669.560701ms 671.491715ms 671.853984ms 675.482783ms 676.228492ms 676.933348ms 677.020286ms 677.133384ms 682.930953ms 683.380255ms 684.709917ms 685.68885ms 687.378988ms 688.286865ms 688.337011ms 688.794354ms 689.471039ms 689.487832ms 689.647171ms 692.423946ms 693.091872ms 693.646824ms 693.805917ms 694.806304ms 694.873194ms 698.932966ms 
699.666988ms 701.431913ms 703.369182ms 705.373617ms 705.432747ms 707.309072ms 708.97651ms 712.97784ms 713.898294ms 714.095138ms 715.219791ms 715.404459ms 715.588373ms 717.869626ms 723.860545ms 724.33559ms 726.27088ms 728.029078ms 730.985434ms 731.530106ms 736.329342ms 736.849239ms 737.252196ms 739.801455ms 740.978074ms 741.194856ms 748.216373ms 748.654091ms 749.268318ms 749.394405ms 750.794322ms 763.639501ms 764.036556ms 766.68916ms 769.621922ms 772.167553ms 772.604993ms 772.999595ms 773.22343ms 774.697938ms 776.770647ms 784.369994ms 784.715391ms 788.224085ms 788.484252ms 790.737233ms 790.957219ms 790.974988ms 791.178222ms 792.053721ms 792.254746ms 792.464792ms 796.981612ms 802.407657ms 802.935912ms 803.300157ms 804.422404ms 808.436094ms 808.946648ms 809.566252ms 811.825365ms 813.440823ms 815.79648ms 818.723612ms 820.673355ms 826.946768ms 826.963124ms 829.662749ms 830.349688ms 832.288611ms 833.273672ms 835.281269ms 838.24087ms 845.402635ms 847.254127ms 852.981245ms 853.646094ms 855.780878ms 858.405161ms 1.018876805s 1.515910878s 1.527752137s 1.545670134s 1.547204349s 1.557000809s 1.564915881s 1.569557172s 1.598768948s 1.601328361s 1.611364006s 1.664059331s] Apr 18 00:28:36.224: INFO: 50 %ile: 689.487832ms Apr 18 00:28:36.224: INFO: 90 %ile: 835.281269ms Apr 18 00:28:36.224: INFO: 99 %ile: 1.611364006s Apr 18 00:28:36.224: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:28:36.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4968" for this suite. 
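The 50/90/99 %ile figures above come from sorting the 200 collected samples and indexing into the sorted list. Below is a minimal shell sketch of that kind of percentile lookup over a hypothetical ten-sample subset; the exact rounding used by the Go e2e framework's helper is an assumption here, and the sample values are illustrative.

```shell
# Hypothetical ten-sample subset (ms); the real run above collects 200 samples.
samples="600 611 665 649 591 598 594 569 575 607"

# pct P: value at 0-based index int(N*P/100) of the ascending-sorted list
# (awk arrays are 1-based, hence the +1).
pct() {
  printf '%s\n' $samples | sort -n | awk -v p="$1" '
    { v[NR] = $0 }
    END { i = int(NR * p / 100) + 1; if (i > NR) i = NR; print v[i] }'
}

echo "50 %ile: $(pct 50)ms"
echo "90 %ile: $(pct 90)ms"
echo "99 %ile: $(pct 99)ms"
```

The real helper works on Go `time.Duration` values; this sketch only mirrors the index arithmetic, not the framework's data types.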
• [SLOW TEST:13.466 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":204,"skipped":3429,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:28:36.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 18 00:28:36.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d" in namespace "projected-4241" to be "Succeeded or Failed"
Apr 18 00:28:36.326: INFO: Pod "downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.390917ms
Apr 18 00:28:38.439: INFO: Pod "downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126837955s
Apr 18 00:28:40.444: INFO: Pod "downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131783524s
STEP: Saw pod success
Apr 18 00:28:40.444: INFO: Pod "downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d" satisfied condition "Succeeded or Failed"
Apr 18 00:28:40.448: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d container client-container:
STEP: delete the pod
Apr 18 00:28:40.513: INFO: Waiting for pod downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d to disappear
Apr 18 00:28:40.517: INFO: Pod downwardapi-volume-08cbd7bc-5810-47b8-8578-c53dd243fb2d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:28:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4241" for this suite.
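The downward API test above mounts a `projected` volume that exposes the container's memory request as a file, then reads the file back from the pod. A minimal manifest sketch of that setup follows; the pod name, image, and the 32Mi request are illustrative, not the values the framework generated.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: registry.example.com/agnhost:latest   # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"               # illustrative request value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: "1"
```

With divisor "1" the value is written in bytes, so for a 32Mi request the container would read 33554432; the test asserts on the container's log output before deleting the pod.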
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3437,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:28:40.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 18 00:28:40.571: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 00:28:40.592: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 00:28:40.594: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 18 00:28:40.599: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.599: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 00:28:40.599: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.599: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 00:28:40.599: INFO: svc-latency-rc-cq9bz from svc-latency-4968 started at 2020-04-18 00:28:22 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.599: INFO: Container svc-latency-rc ready: true, restart count 0
Apr 18 00:28:40.599: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 18 00:28:40.604: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.604: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 00:28:40.604: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.604: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 00:28:40.604: INFO: busybox-readonly-fs556cf71b-79cb-4b83-92d2-281376ca902a from kubelet-test-4265 started at 2020-04-18 00:27:58 +0000 UTC (1 container statuses recorded)
Apr 18 00:28:40.604: INFO: Container busybox-readonly-fs556cf71b-79cb-4b83-92d2-281376ca902a ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1d08f143-ee46-4613-9d97-d52cc9445696 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-1d08f143-ee46-4613-9d97-d52cc9445696 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1d08f143-ee46-4613-9d97-d52cc9445696
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:33:48.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3786" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.428 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":206,"skipped":3449,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
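pod5 stays unscheduled because the scheduler treats a hostPort bound on 0.0.0.0 as occupying that port for every host IP, so a later pod requesting the same port and protocol on 127.0.0.1 on the same node conflicts. A sketch of the two colliding declarations follows; the images and container names are illustrative, while the node-selector label is the one applied in the log above.

```yaml
# pod4: hostIP omitted, which defaults to 0.0.0.0 (all host addresses).
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-1d08f143-ee46-4613-9d97-d52cc9445696: "95"
  containers:
  - name: agnhost
    image: registry.example.com/agnhost:latest   # placeholder image
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
# pod5: same hostPort and protocol on 127.0.0.1; 0.0.0.0 already covers that
# address, so pod5 cannot be scheduled onto the node pod4 occupies.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-1d08f143-ee46-4613-9d97-d52cc9445696: "95"
  containers:
  - name: agnhost
    image: registry.example.com/agnhost:latest   # placeholder image
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

The test passes when pod5 remains Pending for the full observation window, which is why this spec takes over five minutes.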
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:33:48.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5171 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5171;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5171 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5171;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5171.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5171.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5171.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5171.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5171.svc SRV)" && test -n "$$check" && echo 
OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5171.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5171.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5171.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.146_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5171 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5171;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5171 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5171;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5171.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5171.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5171.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5171.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5171.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.dns-test-service.dns-5171.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5171.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5171.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5171.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5171.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.243.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.243.146_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:33:55.167: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.172: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods 
dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.189: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.208: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.210: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.212: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.218: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.223: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the 
requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.225: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:33:55.240: INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:00.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.251: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.255: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested 
resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.260: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.264: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.267: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.293: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.296: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.298: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could 
not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.304: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.309: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.312: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:00.328: INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:05.245: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.249: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.252: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.258: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.260: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.263: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.265: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.298: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.301: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.304: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.310: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.313: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:05.338: 
INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:10.273: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.276: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.278: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.284: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.287: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.290: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.309: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.312: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.315: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.317: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.320: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.342: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:10.365: INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:15.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 
00:34:15.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.267: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.288: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.294: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods 
dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.296: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.301: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.303: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.306: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.308: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:15.323: INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:20.246: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.250: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.260: INFO: Unable to read wheezy_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.264: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc from pod 
dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.290: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.293: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.296: INFO: Unable to read jessie_udp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171 from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.302: INFO: Unable to read jessie_udp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.305: INFO: Unable to read jessie_tcp@dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.308: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.311: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc from pod dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a: the server could not find the requested resource (get pods dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a) Apr 18 00:34:20.327: INFO: Lookups using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5171 wheezy_tcp@dns-test-service.dns-5171 wheezy_udp@dns-test-service.dns-5171.svc wheezy_tcp@dns-test-service.dns-5171.svc wheezy_udp@_http._tcp.dns-test-service.dns-5171.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5171.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5171 jessie_tcp@dns-test-service.dns-5171 jessie_udp@dns-test-service.dns-5171.svc jessie_tcp@dns-test-service.dns-5171.svc jessie_udp@_http._tcp.dns-test-service.dns-5171.svc jessie_tcp@_http._tcp.dns-test-service.dns-5171.svc] Apr 18 00:34:25.333: INFO: DNS probes using dns-5171/dns-test-b59526a6-bbbc-4cee-a96c-244dddde608a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:34:26.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5171" for this suite. 
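For reference, the two synthetic DNS names that the probe scripts in this test derive from raw IPs can be reproduced outside the cluster. This is a self-contained sketch, not part of the test output: the namespace dns-5171, the cluster.local domain, and the service IP 10.96.243.146 are taken from the log above, while the pod IP 10.244.1.3 is illustrative.

```shell
# Pod A record: the probe rewrites the pod IP (hostname -i in the real script)
# into <a>-<b>-<c>-<d>.<namespace>.pod.<cluster-domain>, as in the awk one-liner above.
pod_ip="10.244.1.3"                       # illustrative pod IP, not from the log
pod_a_record=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5171.pod.cluster.local"}')

# Reverse PTR name: the service IP's octets reversed under in-addr.arpa.,
# matching the 146.243.96.10.in-addr.arpa. query in the probe commands.
svc_ip="10.96.243.146"
ptr_name=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

echo "$pod_a_record"   # 10-244-1-3.dns-5171.pod.cluster.local
echo "$ptr_name"       # 146.243.96.10.in-addr.arpa.
```

The dots become dashes in the pod record because a dot is a label separator in DNS, so the IP must be encoded into a single label.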
• [SLOW TEST:37.111 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":207,"skipped":3487,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:34:26.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:34:26.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-717" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":208,"skipped":3492,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:34:26.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0418 00:34:27.361847 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 18 00:34:27.361: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:34:27.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4915" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":209,"skipped":3506,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:34:27.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:34:56.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-117" for this suite.
• [SLOW TEST:28.739 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3510,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:34:56.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 18 00:34:56.859: INFO: Pod name wrapped-volume-race-40145c4d-eb25-44cd-8dda-71c3692809ee: Found 0 pods out of 5
Apr 18 00:35:01.866: INFO: Pod name wrapped-volume-race-40145c4d-eb25-44cd-8dda-71c3692809ee: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-40145c4d-eb25-44cd-8dda-71c3692809ee in namespace emptydir-wrapper-643, will wait for the garbage collector to delete the pods
Apr 18 00:35:16.090: INFO: Deleting ReplicationController wrapped-volume-race-40145c4d-eb25-44cd-8dda-71c3692809ee took: 7.040015ms
Apr 18 00:35:16.491: INFO: Terminating ReplicationController wrapped-volume-race-40145c4d-eb25-44cd-8dda-71c3692809ee pods took: 400.240949ms
STEP: Creating RC which spawns configmap-volume pods
Apr 18 00:35:32.921: INFO: Pod name wrapped-volume-race-bfadbf47-e71b-4916-bdbf-711b32c722cb: Found 0 pods out of 5
Apr 18 00:35:37.928: INFO: Pod name wrapped-volume-race-bfadbf47-e71b-4916-bdbf-711b32c722cb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bfadbf47-e71b-4916-bdbf-711b32c722cb in namespace emptydir-wrapper-643, will wait for the garbage collector to delete the pods
Apr 18 00:35:52.014: INFO: Deleting ReplicationController wrapped-volume-race-bfadbf47-e71b-4916-bdbf-711b32c722cb took: 12.850316ms
Apr 18 00:35:52.314: INFO: Terminating ReplicationController wrapped-volume-race-bfadbf47-e71b-4916-bdbf-711b32c722cb pods took: 300.255303ms
STEP: Creating RC which spawns configmap-volume pods
Apr 18 00:36:03.531: INFO: Pod name wrapped-volume-race-7b026fd2-f83a-476a-8665-b8dcb58331aa: Found 0 pods out of 5
Apr 18 00:36:08.538: INFO: Pod name wrapped-volume-race-7b026fd2-f83a-476a-8665-b8dcb58331aa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7b026fd2-f83a-476a-8665-b8dcb58331aa in namespace emptydir-wrapper-643, will wait for the garbage collector to delete the pods
Apr 18 00:36:22.622: INFO: Deleting ReplicationController wrapped-volume-race-7b026fd2-f83a-476a-8665-b8dcb58331aa took: 7.407219ms
Apr 18 00:36:23.022: INFO: Terminating ReplicationController wrapped-volume-race-7b026fd2-f83a-476a-8665-b8dcb58331aa pods took: 400.250555ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:36:34.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-643" for this suite.
• [SLOW TEST:98.305 seconds]
[sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":211,"skipped":3535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:36:34.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 18 00:36:34.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb" in namespace "downward-api-3918" to be "Succeeded or Failed"
Apr 18 00:36:34.534: INFO: Pod "downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.151323ms
Apr 18 00:36:36.538: INFO: Pod "downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02995108s
Apr 18 00:36:38.543: INFO: Pod "downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034487788s
STEP: Saw pod success
Apr 18 00:36:38.543: INFO: Pod "downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb" satisfied condition "Succeeded or Failed"
Apr 18 00:36:38.546: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb container client-container: 
STEP: delete the pod
Apr 18 00:36:38.605: INFO: Waiting for pod downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb to disappear
Apr 18 00:36:38.612: INFO: Pod downwardapi-volume-fab06415-4477-485b-b23d-40bbc9f88beb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:36:38.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3918" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3567,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:36:38.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 18 00:36:38.691: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9" in namespace "security-context-test-3239" to be "Succeeded or Failed"
Apr 18 00:36:38.695: INFO: Pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08281ms
Apr 18 00:36:40.876: INFO: Pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185022074s
Apr 18 00:36:42.880: INFO: Pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188668598s
Apr 18 00:36:44.884: INFO: Pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193195883s
Apr 18 00:36:44.884: INFO: Pod "alpine-nnp-false-58aa9abc-a501-4bba-9bdb-5daeea186ee9" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:36:44.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3239" for this suite.
• [SLOW TEST:6.292 seconds]
[k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3579,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:36:44.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 18 00:36:45.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9" in namespace "projected-8210" to be "Succeeded or Failed"
Apr 18 00:36:45.007: INFO: Pod "downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136401ms
Apr 18 00:36:47.012: INFO: Pod "downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010279988s
Apr 18 00:36:49.016: INFO: Pod "downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014266406s
STEP: Saw pod success
Apr 18 00:36:49.016: INFO: Pod "downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9" satisfied condition "Succeeded or Failed"
Apr 18 00:36:49.018: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9 container client-container: 
STEP: delete the pod
Apr 18 00:36:49.033: INFO: Waiting for pod downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9 to disappear
Apr 18 00:36:49.037: INFO: Pod downwardapi-volume-9e77d824-1a7d-4a4a-a26f-f023f94e62f9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:36:49.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8210" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3588,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:36:49.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 18 00:36:49.953: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 18 00:36:52.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767010, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 18 00:36:55.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767010, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 18 00:36:56.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767010, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767009, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 18 00:36:59.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:36:59.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5988" for this suite.
STEP: Destroying namespace "webhook-5988-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.633 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":215,"skipped":3617,"failed":0}
[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:36:59.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 18 00:36:59.768: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Apr 18 00:37:00.737: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 18 00:37:03.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767020, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767020, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767020, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767020, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 18 00:37:05.662: INFO: Waited 624.99948ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:37:06.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9350" for this suite.
• [SLOW TEST:6.563 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":216,"skipped":3617,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:37:06.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 18 00:37:06.379: INFO: Waiting up to 5m0s for pod "downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53" in namespace "downward-api-9346" to be "Succeeded or Failed" Apr 18 00:37:06.396: INFO: Pod "downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53": Phase="Pending", Reason="", readiness=false. Elapsed: 17.7589ms Apr 18 00:37:08.428: INFO: Pod "downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049747779s Apr 18 00:37:10.433: INFO: Pod "downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05381644s STEP: Saw pod success Apr 18 00:37:10.433: INFO: Pod "downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53" satisfied condition "Succeeded or Failed" Apr 18 00:37:10.435: INFO: Trying to get logs from node latest-worker pod downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53 container dapi-container: STEP: delete the pod Apr 18 00:37:10.474: INFO: Waiting for pod downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53 to disappear Apr 18 00:37:10.480: INFO: Pod downward-api-23a6a22a-7e7a-48e6-92c8-dfd3ef557e53 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:37:10.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9346" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3630,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:37:10.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 18 00:37:10.570: INFO: Waiting up 
to 1m0s for all (but 0) nodes to be ready Apr 18 00:37:10.580: INFO: Waiting for terminating namespaces to be deleted... Apr 18 00:37:10.582: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 18 00:37:10.588: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:37:10.588: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:37:10.588: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:37:10.588: INFO: Container kube-proxy ready: true, restart count 0 Apr 18 00:37:10.588: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 18 00:37:10.592: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:37:10.592: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 00:37:10.592: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 18 00:37:10.592: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0e2b0d71-667f-4972-808a-3202e088a5f2 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-0e2b0d71-667f-4972-808a-3202e088a5f2 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0e2b0d71-667f-4972-808a-3202e088a5f2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:37:18.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9951" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.263 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":218,"skipped":3640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:37:18.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-7cfb7899-b9d1-445f-8d42-6c89b673c5f8 in namespace container-probe-7980
Apr 18 00:37:22.866: INFO: Started pod test-webserver-7cfb7899-b9d1-445f-8d42-6c89b673c5f8 in namespace container-probe-7980
STEP: checking the pod's current state and verifying that restartCount is present
Apr 18 00:37:22.870: INFO: Initial restart count of pod test-webserver-7cfb7899-b9d1-445f-8d42-6c89b673c5f8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:41:23.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7980" for this suite.
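The kubelet behaviour this spec relies on is that a container is restarted only after the liveness probe fails failureThreshold times in a row; a consistently healthy /healthz endpoint therefore leaves restartCount at 0 for the whole observation window. A simplified model of that decision (not kubelet source):

```python
def restart_count(probe_results, failure_threshold=3):
    """Count restarts for a sequence of probe outcomes (True = probe
    succeeded). A restart fires after failure_threshold consecutive
    failures, and the consecutive-failure counter then resets."""
    restarts = 0
    consecutive_failures = 0
    for ok in probe_results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts


# ~4 minutes of healthy probes, as in the test window above: no restarts.
assert restart_count([True] * 240) == 0
# By contrast, a threshold's worth of consecutive failures triggers one.
assert restart_count([False, False, False]) == 1
```

This is why the spec takes ~4 minutes even though nothing "happens": the pass condition is the absence of a restart over the observation period.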
• [SLOW TEST:244.928 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3675,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:41:23.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-69204e95-3f6d-4b9d-b9e8-9c3dc76c152a
STEP: Creating configMap with name cm-test-opt-upd-6b553257-db41-4736-889f-386535487859
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-69204e95-3f6d-4b9d-b9e8-9c3dc76c152a
STEP: Updating configmap cm-test-opt-upd-6b553257-db41-4736-889f-386535487859
STEP: Creating configMap with name cm-test-opt-create-477b2d09-f2be-421f-aadc-4906a25778d5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:42:51.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-356" for this suite.
• [SLOW TEST:87.836 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3677,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:42:51.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 18 00:42:51.602: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9d424e72-3255-43c6-a0b9-3d456e9341ab" in namespace "security-context-test-114" to be "Succeeded or Failed"
Apr 18 00:42:51.606: INFO: Pod "busybox-user-65534-9d424e72-3255-43c6-a0b9-3d456e9341ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008744ms
Apr 18 00:42:53.610: INFO: Pod "busybox-user-65534-9d424e72-3255-43c6-a0b9-3d456e9341ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007524922s
Apr 18 00:42:55.614: INFO: Pod "busybox-user-65534-9d424e72-3255-43c6-a0b9-3d456e9341ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011584495s
Apr 18 00:42:55.614: INFO: Pod "busybox-user-65534-9d424e72-3255-43c6-a0b9-3d456e9341ab" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:42:55.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-114" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3685,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:42:55.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0418 00:43:06.482667 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 18 00:43:06.482: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:06.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2116" for this suite.
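The invariant this garbage-collector spec verifies is that a dependent object is only collectable once none of the UIDs in its ownerReferences point at a live object; giving half the pods both RCs as owners means deleting simpletest-rc-to-be-deleted alone must not remove them. An illustrative model of that rule:

```python
def collectable(owner_uids, live_uids):
    """A dependent may be garbage-collected only when none of the
    entries in its ownerReferences still refer to a live object."""
    return not any(uid in live_uids for uid in owner_uids)


# After deleting simpletest-rc-to-be-deleted, only the other RC is live.
live = {"simpletest-rc-to-stay"}

# Pods owned by both RCs survive: one owner is still present.
assert not collectable(["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"], live)
# Pods owned solely by the deleted RC are collected.
assert collectable(["simpletest-rc-to-be-deleted"], live)
```

The "waiting for dependents" part refers to foreground cascading deletion: the deleted owner lingers with a deletion timestamp until its exclusive dependents are gone, but that never licenses collecting a dependent that still has another valid owner.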
• [SLOW TEST:10.919 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":222,"skipped":3711,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:06.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-3dffc2c9-212c-4cce-b57b-7301a6b14a58
STEP: Creating secret with name s-test-opt-upd-4f762763-5ec0-460e-a3cd-e0581fb5c9d1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3dffc2c9-212c-4cce-b57b-7301a6b14a58
STEP: Updating secret s-test-opt-upd-4f762763-5ec0-460e-a3cd-e0581fb5c9d1
STEP: Creating secret with name s-test-opt-create-e840c1e5-90f0-4eac-af00-f99070f676fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7113" for this suite.
• [SLOW TEST:12.348 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3715,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:18.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-cd9d9c6c-15ad-4fa9-823c-0a695d9b8d2e
STEP: Creating secret with name secret-projected-all-test-volume-92663a68-772f-4f37-86dd-f92693150ab3
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 18 00:43:18.981: INFO: Waiting up to 5m0s for pod "projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4" in namespace "projected-5254" to be "Succeeded or Failed"
Apr 18 00:43:18.985: INFO: Pod "projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082392ms
Apr 18 00:43:21.212: INFO: Pod "projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231338358s
Apr 18 00:43:23.220: INFO: Pod "projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.239086439s
STEP: Saw pod success
Apr 18 00:43:23.220: INFO: Pod "projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4" satisfied condition "Succeeded or Failed"
Apr 18 00:43:23.223: INFO: Trying to get logs from node latest-worker pod projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4 container projected-all-volume-test:
STEP: delete the pod
Apr 18 00:43:23.267: INFO: Waiting for pod projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4 to disappear
Apr 18 00:43:23.282: INFO: Pod projected-volume-0b32eff0-10b7-4e08-8fdc-c5aa1ab1cec4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:23.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5254" for this suite.
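The "all components" projection above combines a ConfigMap, a Secret, and the downward API as sources of a single volume. A minimal sketch of that volume's shape, built as a Python dict so the structure is checkable (names and item keys are illustrative, not the test's exact spec):

```python
# A single projected volume merging all three source kinds.
projected_volume = {
    "name": "podinfo",
    "projected": {
        "sources": [
            {"configMap": {"name": "configmap-projected-all-test-volume",
                           "items": [{"key": "configmap-data-1",
                                      "path": "configmap-data-1"}]}},
            {"secret": {"name": "secret-projected-all-test-volume",
                        "items": [{"key": "secret-data-1",
                                   "path": "secret-data-1"}]}},
            {"downwardAPI": {"items": [{"path": "podname",
                                        "fieldRef": {"fieldPath": "metadata.name"}}]}},
        ]
    },
}

# What distinguishes a projected volume from three separate volumes is
# that all sources sit under one .projected.sources list and land in
# one mount directory.
kinds = [next(iter(s)) for s in projected_volume["projected"]["sources"]]
assert kinds == ["configMap", "secret", "downwardAPI"]
```

The test container then reads the files from the single mount point and the pod succeeds only if every source was projected.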
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3725,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:23.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 18 00:43:23.360: INFO: Waiting up to 5m0s for pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9" in namespace "var-expansion-4971" to be "Succeeded or Failed"
Apr 18 00:43:23.375: INFO: Pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.737885ms
Apr 18 00:43:25.735: INFO: Pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374707936s
Apr 18 00:43:27.739: INFO: Pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378928013s
Apr 18 00:43:29.743: INFO: Pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383107871s
STEP: Saw pod success
Apr 18 00:43:29.743: INFO: Pod "var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9" satisfied condition "Succeeded or Failed"
Apr 18 00:43:29.746: INFO: Trying to get logs from node latest-worker pod var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9 container dapi-container:
STEP: delete the pod
Apr 18 00:43:29.811: INFO: Waiting for pod var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9 to disappear
Apr 18 00:43:29.834: INFO: Pod var-expansion-224c55dc-ade7-4ad3-9b7f-881974e84cb9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:29.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4971" for this suite.
• [SLOW TEST:6.532 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3736,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:29.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 18 00:43:29.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 18 00:43:32.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5290 create -f -'
Apr 18 00:43:37.316: INFO: stderr: ""
Apr 18 00:43:37.316: INFO: stdout: "e2e-test-crd-publish-openapi-4279-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 18 00:43:37.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5290 delete e2e-test-crd-publish-openapi-4279-crds test-cr'
Apr 18 00:43:37.437: INFO: stderr: ""
Apr 18 00:43:37.437: INFO: stdout: "e2e-test-crd-publish-openapi-4279-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 18 00:43:37.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5290 apply -f -'
Apr 18 00:43:37.657: INFO: stderr: ""
Apr 18 00:43:37.657: INFO: stdout: "e2e-test-crd-publish-openapi-4279-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 18 00:43:37.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5290 delete e2e-test-crd-publish-openapi-4279-crds test-cr'
Apr 18 00:43:37.803: INFO: stderr: ""
Apr 18 00:43:37.803: INFO: stdout: "e2e-test-crd-publish-openapi-4279-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 18 00:43:37.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4279-crds'
Apr 18 00:43:38.112: INFO: stderr: ""
Apr 18 00:43:38.113: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4279-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:40.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5290" for this suite.
• [SLOW TEST:10.189 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":226,"skipped":3744,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:40.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:43:57.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3654" for this suite.
• [SLOW TEST:17.153 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":227,"skipped":3764,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:43:57.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 18 00:44:01.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-81" for this suite.
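The wrapper-volume spec above mounts a Secret volume and a ConfigMap volume (each backed internally by an emptyDir wrapper) side by side in one pod and checks they coexist without conflict. Roughly, with illustrative names:

```python
# Sketch of a pod spec with a Secret volume and a ConfigMap volume
# mounted side by side; names and paths are hypothetical.
pod_spec = {
    "volumes": [
        {"name": "secret-volume",
         "secret": {"secretName": "wrapped-volume-secret"}},
        {"name": "configmap-volume",
         "configMap": {"name": "wrapped-volume-configmap"}},
    ],
    "containers": [{
        "name": "secret-test",
        "volumeMounts": [
            {"name": "secret-volume", "mountPath": "/etc/secret-volume",
             "readOnly": True},
            {"name": "configmap-volume", "mountPath": "/etc/configmap-volume",
             "readOnly": True},
        ],
    }],
}

# "Should not conflict" amounts to: distinct volume names and distinct
# mount paths, so the two wrapper emptyDirs never collide on the node.
names = [v["name"] for v in pod_spec["volumes"]]
paths = [m["mountPath"] for m in pod_spec["containers"][0]["volumeMounts"]]
assert len(set(names)) == len(names)
assert len(set(paths)) == len(paths)
```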
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":228,"skipped":3774,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 18 00:44:01.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5224
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-5224
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5224
Apr 18 00:44:01.650: INFO: Found 0 stateful pods, waiting for 1
Apr 18 00:44:11.655: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 18 00:44:11.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-0 -- /bin/sh -x -c mv -v
/usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:44:11.953: INFO: stderr: "I0418 00:44:11.790446 2444 log.go:172] (0xc0009a0000) (0xc0008d0000) Create stream\nI0418 00:44:11.790518 2444 log.go:172] (0xc0009a0000) (0xc0008d0000) Stream added, broadcasting: 1\nI0418 00:44:11.792226 2444 log.go:172] (0xc0009a0000) Reply frame received for 1\nI0418 00:44:11.792259 2444 log.go:172] (0xc0009a0000) (0xc0008d00a0) Create stream\nI0418 00:44:11.792268 2444 log.go:172] (0xc0009a0000) (0xc0008d00a0) Stream added, broadcasting: 3\nI0418 00:44:11.793471 2444 log.go:172] (0xc0009a0000) Reply frame received for 3\nI0418 00:44:11.793533 2444 log.go:172] (0xc0009a0000) (0xc0008192c0) Create stream\nI0418 00:44:11.793559 2444 log.go:172] (0xc0009a0000) (0xc0008192c0) Stream added, broadcasting: 5\nI0418 00:44:11.794498 2444 log.go:172] (0xc0009a0000) Reply frame received for 5\nI0418 00:44:11.890081 2444 log.go:172] (0xc0009a0000) Data frame received for 5\nI0418 00:44:11.890110 2444 log.go:172] (0xc0008192c0) (5) Data frame handling\nI0418 00:44:11.890133 2444 log.go:172] (0xc0008192c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:44:11.946254 2444 log.go:172] (0xc0009a0000) Data frame received for 3\nI0418 00:44:11.946293 2444 log.go:172] (0xc0008d00a0) (3) Data frame handling\nI0418 00:44:11.946316 2444 log.go:172] (0xc0008d00a0) (3) Data frame sent\nI0418 00:44:11.946333 2444 log.go:172] (0xc0009a0000) Data frame received for 3\nI0418 00:44:11.946353 2444 log.go:172] (0xc0008d00a0) (3) Data frame handling\nI0418 00:44:11.946464 2444 log.go:172] (0xc0009a0000) Data frame received for 5\nI0418 00:44:11.946497 2444 log.go:172] (0xc0008192c0) (5) Data frame handling\nI0418 00:44:11.948183 2444 log.go:172] (0xc0009a0000) Data frame received for 1\nI0418 00:44:11.948223 2444 log.go:172] (0xc0008d0000) (1) Data frame handling\nI0418 00:44:11.948307 2444 log.go:172] (0xc0008d0000) (1) Data frame sent\nI0418 00:44:11.948365 2444 
log.go:172] (0xc0009a0000) (0xc0008d0000) Stream removed, broadcasting: 1\nI0418 00:44:11.948476 2444 log.go:172] (0xc0009a0000) Go away received\nI0418 00:44:11.948656 2444 log.go:172] (0xc0009a0000) (0xc0008d0000) Stream removed, broadcasting: 1\nI0418 00:44:11.948674 2444 log.go:172] (0xc0009a0000) (0xc0008d00a0) Stream removed, broadcasting: 3\nI0418 00:44:11.948682 2444 log.go:172] (0xc0009a0000) (0xc0008192c0) Stream removed, broadcasting: 5\n"
Apr 18 00:44:11.954: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 18 00:44:11.954: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 18 00:44:11.961: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 18 00:44:21.965: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 18 00:44:21.965: INFO: Waiting for statefulset status.replicas updated to 0
Apr 18 00:44:21.996: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 18 00:44:21.996: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }]
Apr 18 00:44:21.996: INFO:
Apr 18 00:44:21.996: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 18 00:44:23.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977385902s
Apr 18 00:44:24.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971910045s
Apr 18 00:44:25.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.966729851s
Apr 18 00:44:26.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960519233s
Apr 18 00:44:27.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.956709598s
Apr 18 00:44:28.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95115526s
Apr 18 00:44:29.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939624181s
Apr 18 00:44:30.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.935676005s
Apr 18 00:44:31.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.694927ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5224
Apr 18 00:44:32.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 18 00:44:32.283: INFO: stderr: "I0418 00:44:32.194122 2466 log.go:172] (0xc00058e000) (0xc000645720) Create stream\nI0418 00:44:32.194182 2466 log.go:172] (0xc00058e000) (0xc000645720) Stream added, broadcasting: 1\nI0418 00:44:32.196642 2466 log.go:172] (0xc00058e000) Reply frame received for 1\nI0418 00:44:32.196690 2466 log.go:172] (0xc00058e000) (0xc000a0e000) Create stream\nI0418 00:44:32.196703 2466 log.go:172] (0xc00058e000) (0xc000a0e000) Stream added, broadcasting: 3\nI0418 00:44:32.197689 2466 log.go:172] (0xc00058e000) Reply frame received for 3\nI0418 00:44:32.197718 2466 log.go:172] (0xc00058e000) (0xc000a0e0a0) Create stream\nI0418 00:44:32.197728 2466 log.go:172] (0xc00058e000) (0xc000a0e0a0) Stream added, broadcasting: 5\nI0418 00:44:32.198611 2466 log.go:172] (0xc00058e000) Reply frame received for 5\nI0418 00:44:32.277364 2466 log.go:172] (0xc00058e000) Data frame received for 5\nI0418 00:44:32.277404 2466 log.go:172] (0xc000a0e0a0) (5) Data frame handling\nI0418 00:44:32.277420 2466 log.go:172] (0xc000a0e0a0) (5) Data frame sent\n+ mv
-v /tmp/index.html /usr/local/apache2/htdocs/\nI0418 00:44:32.277443 2466 log.go:172] (0xc00058e000) Data frame received for 3\nI0418 00:44:32.277454 2466 log.go:172] (0xc000a0e000) (3) Data frame handling\nI0418 00:44:32.277461 2466 log.go:172] (0xc000a0e000) (3) Data frame sent\nI0418 00:44:32.277523 2466 log.go:172] (0xc00058e000) Data frame received for 5\nI0418 00:44:32.277540 2466 log.go:172] (0xc000a0e0a0) (5) Data frame handling\nI0418 00:44:32.277597 2466 log.go:172] (0xc00058e000) Data frame received for 3\nI0418 00:44:32.277612 2466 log.go:172] (0xc000a0e000) (3) Data frame handling\nI0418 00:44:32.279211 2466 log.go:172] (0xc00058e000) Data frame received for 1\nI0418 00:44:32.279246 2466 log.go:172] (0xc000645720) (1) Data frame handling\nI0418 00:44:32.279282 2466 log.go:172] (0xc000645720) (1) Data frame sent\nI0418 00:44:32.279319 2466 log.go:172] (0xc00058e000) (0xc000645720) Stream removed, broadcasting: 1\nI0418 00:44:32.279349 2466 log.go:172] (0xc00058e000) Go away received\nI0418 00:44:32.279636 2466 log.go:172] (0xc00058e000) (0xc000645720) Stream removed, broadcasting: 1\nI0418 00:44:32.279649 2466 log.go:172] (0xc00058e000) (0xc000a0e000) Stream removed, broadcasting: 3\nI0418 00:44:32.279667 2466 log.go:172] (0xc00058e000) (0xc000a0e0a0) Stream removed, broadcasting: 5\n" Apr 18 00:44:32.283: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:44:32.283: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:44:32.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:44:32.536: INFO: stderr: "I0418 00:44:32.447547 2488 log.go:172] (0xc000744a50) (0xc000740140) Create stream\nI0418 00:44:32.447598 2488 log.go:172] (0xc000744a50) 
(0xc000740140) Stream added, broadcasting: 1\nI0418 00:44:32.450265 2488 log.go:172] (0xc000744a50) Reply frame received for 1\nI0418 00:44:32.450300 2488 log.go:172] (0xc000744a50) (0xc000593400) Create stream\nI0418 00:44:32.450310 2488 log.go:172] (0xc000744a50) (0xc000593400) Stream added, broadcasting: 3\nI0418 00:44:32.451335 2488 log.go:172] (0xc000744a50) Reply frame received for 3\nI0418 00:44:32.451356 2488 log.go:172] (0xc000744a50) (0xc0007401e0) Create stream\nI0418 00:44:32.451364 2488 log.go:172] (0xc000744a50) (0xc0007401e0) Stream added, broadcasting: 5\nI0418 00:44:32.452694 2488 log.go:172] (0xc000744a50) Reply frame received for 5\nI0418 00:44:32.524480 2488 log.go:172] (0xc000744a50) Data frame received for 5\nI0418 00:44:32.524527 2488 log.go:172] (0xc0007401e0) (5) Data frame handling\nI0418 00:44:32.524543 2488 log.go:172] (0xc0007401e0) (5) Data frame sent\nI0418 00:44:32.524553 2488 log.go:172] (0xc000744a50) Data frame received for 5\nI0418 00:44:32.524564 2488 log.go:172] (0xc0007401e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0418 00:44:32.524602 2488 log.go:172] (0xc000744a50) Data frame received for 3\nI0418 00:44:32.524627 2488 log.go:172] (0xc000593400) (3) Data frame handling\nI0418 00:44:32.524659 2488 log.go:172] (0xc000593400) (3) Data frame sent\nI0418 00:44:32.524676 2488 log.go:172] (0xc000744a50) Data frame received for 3\nI0418 00:44:32.524695 2488 log.go:172] (0xc000593400) (3) Data frame handling\nI0418 00:44:32.526365 2488 log.go:172] (0xc000744a50) Data frame received for 1\nI0418 00:44:32.526390 2488 log.go:172] (0xc000740140) (1) Data frame handling\nI0418 00:44:32.526410 2488 log.go:172] (0xc000740140) (1) Data frame sent\nI0418 00:44:32.526435 2488 log.go:172] (0xc000744a50) (0xc000740140) Stream removed, broadcasting: 1\nI0418 00:44:32.526450 2488 log.go:172] (0xc000744a50) Go away received\nI0418 
00:44:32.526809 2488 log.go:172] (0xc000744a50) (0xc000740140) Stream removed, broadcasting: 1\nI0418 00:44:32.526829 2488 log.go:172] (0xc000744a50) (0xc000593400) Stream removed, broadcasting: 3\nI0418 00:44:32.526841 2488 log.go:172] (0xc000744a50) (0xc0007401e0) Stream removed, broadcasting: 5\n" Apr 18 00:44:32.536: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:44:32.536: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:44:32.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:44:32.756: INFO: stderr: "I0418 00:44:32.690286 2511 log.go:172] (0xc0000e8370) (0xc0008192c0) Create stream\nI0418 00:44:32.690333 2511 log.go:172] (0xc0000e8370) (0xc0008192c0) Stream added, broadcasting: 1\nI0418 00:44:32.692300 2511 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0418 00:44:32.692339 2511 log.go:172] (0xc0000e8370) (0xc000998000) Create stream\nI0418 00:44:32.692351 2511 log.go:172] (0xc0000e8370) (0xc000998000) Stream added, broadcasting: 3\nI0418 00:44:32.693403 2511 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0418 00:44:32.693442 2511 log.go:172] (0xc0000e8370) (0xc0009a6000) Create stream\nI0418 00:44:32.693458 2511 log.go:172] (0xc0000e8370) (0xc0009a6000) Stream added, broadcasting: 5\nI0418 00:44:32.694603 2511 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0418 00:44:32.749600 2511 log.go:172] (0xc0000e8370) Data frame received for 3\nI0418 00:44:32.749630 2511 log.go:172] (0xc000998000) (3) Data frame handling\nI0418 00:44:32.749651 2511 log.go:172] (0xc000998000) (3) Data frame sent\nI0418 00:44:32.749662 2511 log.go:172] (0xc0000e8370) Data frame received for 3\nI0418 00:44:32.749670 2511 log.go:172] 
(0xc000998000) (3) Data frame handling\nI0418 00:44:32.749827 2511 log.go:172] (0xc0000e8370) Data frame received for 5\nI0418 00:44:32.749859 2511 log.go:172] (0xc0009a6000) (5) Data frame handling\nI0418 00:44:32.749879 2511 log.go:172] (0xc0009a6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0418 00:44:32.749892 2511 log.go:172] (0xc0000e8370) Data frame received for 5\nI0418 00:44:32.749933 2511 log.go:172] (0xc0009a6000) (5) Data frame handling\nI0418 00:44:32.751340 2511 log.go:172] (0xc0000e8370) Data frame received for 1\nI0418 00:44:32.751363 2511 log.go:172] (0xc0008192c0) (1) Data frame handling\nI0418 00:44:32.751377 2511 log.go:172] (0xc0008192c0) (1) Data frame sent\nI0418 00:44:32.751392 2511 log.go:172] (0xc0000e8370) (0xc0008192c0) Stream removed, broadcasting: 1\nI0418 00:44:32.751410 2511 log.go:172] (0xc0000e8370) Go away received\nI0418 00:44:32.751744 2511 log.go:172] (0xc0000e8370) (0xc0008192c0) Stream removed, broadcasting: 1\nI0418 00:44:32.751760 2511 log.go:172] (0xc0000e8370) (0xc000998000) Stream removed, broadcasting: 3\nI0418 00:44:32.751769 2511 log.go:172] (0xc0000e8370) (0xc0009a6000) Stream removed, broadcasting: 5\n" Apr 18 00:44:32.756: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:44:32.756: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:44:32.760: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:44:32.760: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:44:32.760: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 18 00:44:32.762: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:44:32.963: INFO: stderr: "I0418 00:44:32.894048 2532 log.go:172] (0xc000b4b290) (0xc000c04500) Create stream\nI0418 00:44:32.894103 2532 log.go:172] (0xc000b4b290) (0xc000c04500) Stream added, broadcasting: 1\nI0418 00:44:32.898721 2532 log.go:172] (0xc000b4b290) Reply frame received for 1\nI0418 00:44:32.898791 2532 log.go:172] (0xc000b4b290) (0xc0006a77c0) Create stream\nI0418 00:44:32.898820 2532 log.go:172] (0xc000b4b290) (0xc0006a77c0) Stream added, broadcasting: 3\nI0418 00:44:32.899943 2532 log.go:172] (0xc000b4b290) Reply frame received for 3\nI0418 00:44:32.899968 2532 log.go:172] (0xc000b4b290) (0xc000544be0) Create stream\nI0418 00:44:32.899978 2532 log.go:172] (0xc000b4b290) (0xc000544be0) Stream added, broadcasting: 5\nI0418 00:44:32.900971 2532 log.go:172] (0xc000b4b290) Reply frame received for 5\nI0418 00:44:32.956832 2532 log.go:172] (0xc000b4b290) Data frame received for 5\nI0418 00:44:32.956857 2532 log.go:172] (0xc000544be0) (5) Data frame handling\nI0418 00:44:32.956867 2532 log.go:172] (0xc000544be0) (5) Data frame sent\nI0418 00:44:32.956875 2532 log.go:172] (0xc000b4b290) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:44:32.956927 2532 log.go:172] (0xc000b4b290) Data frame received for 3\nI0418 00:44:32.956974 2532 log.go:172] (0xc0006a77c0) (3) Data frame handling\nI0418 00:44:32.956990 2532 log.go:172] (0xc0006a77c0) (3) Data frame sent\nI0418 00:44:32.957000 2532 log.go:172] (0xc000b4b290) Data frame received for 3\nI0418 00:44:32.957010 2532 log.go:172] (0xc0006a77c0) (3) Data frame handling\nI0418 00:44:32.957052 2532 log.go:172] (0xc000544be0) (5) Data frame handling\nI0418 00:44:32.959095 2532 log.go:172] (0xc000b4b290) Data frame received for 1\nI0418 00:44:32.959117 2532 log.go:172] (0xc000c04500) (1) 
Data frame handling\nI0418 00:44:32.959134 2532 log.go:172] (0xc000c04500) (1) Data frame sent\nI0418 00:44:32.959150 2532 log.go:172] (0xc000b4b290) (0xc000c04500) Stream removed, broadcasting: 1\nI0418 00:44:32.959182 2532 log.go:172] (0xc000b4b290) Go away received\nI0418 00:44:32.959484 2532 log.go:172] (0xc000b4b290) (0xc000c04500) Stream removed, broadcasting: 1\nI0418 00:44:32.959501 2532 log.go:172] (0xc000b4b290) (0xc0006a77c0) Stream removed, broadcasting: 3\nI0418 00:44:32.959510 2532 log.go:172] (0xc000b4b290) (0xc000544be0) Stream removed, broadcasting: 5\n" Apr 18 00:44:32.963: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:44:32.963: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:44:32.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:44:33.209: INFO: stderr: "I0418 00:44:33.083683 2554 log.go:172] (0xc00003afd0) (0xc000906820) Create stream\nI0418 00:44:33.083746 2554 log.go:172] (0xc00003afd0) (0xc000906820) Stream added, broadcasting: 1\nI0418 00:44:33.089534 2554 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0418 00:44:33.089583 2554 log.go:172] (0xc00003afd0) (0xc0005ff540) Create stream\nI0418 00:44:33.089604 2554 log.go:172] (0xc00003afd0) (0xc0005ff540) Stream added, broadcasting: 3\nI0418 00:44:33.090513 2554 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0418 00:44:33.090540 2554 log.go:172] (0xc00003afd0) (0xc00050e960) Create stream\nI0418 00:44:33.090565 2554 log.go:172] (0xc00003afd0) (0xc00050e960) Stream added, broadcasting: 5\nI0418 00:44:33.091499 2554 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0418 00:44:33.160984 2554 log.go:172] (0xc00003afd0) Data frame received for 
5\nI0418 00:44:33.161012 2554 log.go:172] (0xc00050e960) (5) Data frame handling\nI0418 00:44:33.161031 2554 log.go:172] (0xc00050e960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:44:33.202496 2554 log.go:172] (0xc00003afd0) Data frame received for 5\nI0418 00:44:33.202562 2554 log.go:172] (0xc00050e960) (5) Data frame handling\nI0418 00:44:33.202598 2554 log.go:172] (0xc00003afd0) Data frame received for 3\nI0418 00:44:33.202624 2554 log.go:172] (0xc0005ff540) (3) Data frame handling\nI0418 00:44:33.202643 2554 log.go:172] (0xc0005ff540) (3) Data frame sent\nI0418 00:44:33.202658 2554 log.go:172] (0xc00003afd0) Data frame received for 3\nI0418 00:44:33.202672 2554 log.go:172] (0xc0005ff540) (3) Data frame handling\nI0418 00:44:33.204828 2554 log.go:172] (0xc00003afd0) Data frame received for 1\nI0418 00:44:33.204874 2554 log.go:172] (0xc000906820) (1) Data frame handling\nI0418 00:44:33.204908 2554 log.go:172] (0xc000906820) (1) Data frame sent\nI0418 00:44:33.204993 2554 log.go:172] (0xc00003afd0) (0xc000906820) Stream removed, broadcasting: 1\nI0418 00:44:33.205048 2554 log.go:172] (0xc00003afd0) Go away received\nI0418 00:44:33.205650 2554 log.go:172] (0xc00003afd0) (0xc000906820) Stream removed, broadcasting: 1\nI0418 00:44:33.205688 2554 log.go:172] (0xc00003afd0) (0xc0005ff540) Stream removed, broadcasting: 3\nI0418 00:44:33.205705 2554 log.go:172] (0xc00003afd0) (0xc00050e960) Stream removed, broadcasting: 5\n" Apr 18 00:44:33.209: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:44:33.209: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:44:33.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5224 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 
00:44:33.444: INFO: stderr: "I0418 00:44:33.337900 2574 log.go:172] (0xc00003a0b0) (0xc0007cb360) Create stream\nI0418 00:44:33.337981 2574 log.go:172] (0xc00003a0b0) (0xc0007cb360) Stream added, broadcasting: 1\nI0418 00:44:33.340938 2574 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0418 00:44:33.340983 2574 log.go:172] (0xc00003a0b0) (0xc000aec000) Create stream\nI0418 00:44:33.340998 2574 log.go:172] (0xc00003a0b0) (0xc000aec000) Stream added, broadcasting: 3\nI0418 00:44:33.342021 2574 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0418 00:44:33.342069 2574 log.go:172] (0xc00003a0b0) (0xc000580000) Create stream\nI0418 00:44:33.342089 2574 log.go:172] (0xc00003a0b0) (0xc000580000) Stream added, broadcasting: 5\nI0418 00:44:33.342927 2574 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0418 00:44:33.410758 2574 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0418 00:44:33.410799 2574 log.go:172] (0xc000580000) (5) Data frame handling\nI0418 00:44:33.410821 2574 log.go:172] (0xc000580000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:44:33.436713 2574 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:44:33.436756 2574 log.go:172] (0xc000aec000) (3) Data frame handling\nI0418 00:44:33.436862 2574 log.go:172] (0xc000aec000) (3) Data frame sent\nI0418 00:44:33.437037 2574 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:44:33.437060 2574 log.go:172] (0xc000aec000) (3) Data frame handling\nI0418 00:44:33.437093 2574 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0418 00:44:33.437232 2574 log.go:172] (0xc000580000) (5) Data frame handling\nI0418 00:44:33.438884 2574 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0418 00:44:33.438907 2574 log.go:172] (0xc0007cb360) (1) Data frame handling\nI0418 00:44:33.438923 2574 log.go:172] (0xc0007cb360) (1) Data frame sent\nI0418 00:44:33.438951 2574 log.go:172] (0xc00003a0b0) (0xc0007cb360) Stream removed, 
broadcasting: 1\nI0418 00:44:33.439067 2574 log.go:172] (0xc00003a0b0) Go away received\nI0418 00:44:33.439312 2574 log.go:172] (0xc00003a0b0) (0xc0007cb360) Stream removed, broadcasting: 1\nI0418 00:44:33.439328 2574 log.go:172] (0xc00003a0b0) (0xc000aec000) Stream removed, broadcasting: 3\nI0418 00:44:33.439346 2574 log.go:172] (0xc00003a0b0) (0xc000580000) Stream removed, broadcasting: 5\n" Apr 18 00:44:33.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:44:33.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:44:33.444: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:44:33.448: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 18 00:44:43.458: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:44:43.458: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:44:43.458: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:44:43.471: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:43.471: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:43.471: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:43.471: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:43.471: INFO: Apr 18 00:44:43.471: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:45.868: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:45.868: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:45.868: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:45.868: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:45.868: INFO: Apr 18 00:44:45.868: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:46.884: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:46.884: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:46.884: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:46.884: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:46.884: INFO: Apr 18 00:44:46.884: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:47.890: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:47.890: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:47.890: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:47.890: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:47.890: INFO: Apr 18 00:44:47.890: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:48.895: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:48.895: INFO: ss-0 latest-worker Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:48.895: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:48.895: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:48.895: INFO: Apr 18 00:44:48.895: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:49.899: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:49.899: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:49.899: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:49.899: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:49.900: INFO: Apr 18 00:44:49.900: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:50.905: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:50.905: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:50.905: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 
00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:50.905: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:50.905: INFO: Apr 18 00:44:50.905: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:51.922: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:51.922: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:01 +0000 UTC }] Apr 18 00:44:51.922: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 
00:44:51.922: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:51.922: INFO: Apr 18 00:44:51.922: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 18 00:44:52.927: INFO: POD NODE PHASE GRACE CONDITIONS Apr 18 00:44:52.927: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:21 +0000 UTC }] Apr 18 00:44:52.927: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-18 00:44:22 +0000 UTC }] Apr 18 00:44:52.927: INFO: Apr 18 00:44:52.927: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5224 Apr 18 00:44:53.932: INFO: Scaling statefulset ss to 0 Apr 18 00:44:53.942: INFO: Waiting for statefulset 
status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 18 00:44:53.944: INFO: Deleting all statefulset in ns statefulset-5224 Apr 18 00:44:53.947: INFO: Scaling statefulset ss to 0 Apr 18 00:44:53.953: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:44:53.955: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:44:53.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5224" for this suite. • [SLOW TEST:52.639 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":229,"skipped":3797,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 18 00:44:54.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:44:54.054: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:44:55.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7823" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":230,"skipped":3800,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:44:55.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-7e037f8a-763a-49ac-9de7-2ed217a512e0 STEP: Creating a pod to test 
consume configMaps Apr 18 00:44:55.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1" in namespace "configmap-486" to be "Succeeded or Failed" Apr 18 00:44:55.372: INFO: Pod "pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983684ms Apr 18 00:44:57.376: INFO: Pod "pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00715411s Apr 18 00:44:59.399: INFO: Pod "pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029972803s STEP: Saw pod success Apr 18 00:44:59.399: INFO: Pod "pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1" satisfied condition "Succeeded or Failed" Apr 18 00:44:59.420: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1 container configmap-volume-test: STEP: delete the pod Apr 18 00:44:59.447: INFO: Waiting for pod pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1 to disappear Apr 18 00:44:59.453: INFO: Pod pod-configmaps-e408a4eb-b94c-4eb2-a8ea-1e274097c3d1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:44:59.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-486" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3808,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:44:59.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:44:59.555: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:45:01.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Pending, waiting for it to be Running (with Ready = true) Apr 18 00:45:03.559: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:05.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:07.559: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:09.559: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is 
Running (Ready = false) Apr 18 00:45:11.559: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:13.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:15.559: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:17.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:19.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:21.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = false) Apr 18 00:45:23.560: INFO: The status of Pod test-webserver-865b078a-8826-4c5b-b8ae-07c90d4c91e4 is Running (Ready = true) Apr 18 00:45:23.562: INFO: Container started at 2020-04-18 00:45:01 +0000 UTC, pod became ready at 2020-04-18 00:45:21 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:23.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6525" for this suite. 
• [SLOW TEST:24.102 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:23.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-1f894fc9-2bdf-452a-bafa-640ef22f292d [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:23.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-949" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":233,"skipped":3859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:23.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:45:23.698: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119" in namespace "downward-api-9995" to be "Succeeded or Failed" Apr 18 00:45:23.701: INFO: Pod "downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119": Phase="Pending", Reason="", readiness=false. Elapsed: 3.554999ms Apr 18 00:45:25.706: INFO: Pod "downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007941746s Apr 18 00:45:27.710: INFO: Pod "downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012591982s STEP: Saw pod success Apr 18 00:45:27.710: INFO: Pod "downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119" satisfied condition "Succeeded or Failed" Apr 18 00:45:27.714: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119 container client-container: STEP: delete the pod Apr 18 00:45:27.760: INFO: Waiting for pod downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119 to disappear Apr 18 00:45:27.782: INFO: Pod downwardapi-volume-7cd5d93c-440f-4db5-a23d-d5d1c2392119 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9995" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3883,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:27.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
projected-configmap-test-volume-map-0bc92e42-b3ec-40b4-b907-87c1832db984 STEP: Creating a pod to test consume configMaps Apr 18 00:45:27.880: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f" in namespace "projected-1291" to be "Succeeded or Failed" Apr 18 00:45:27.887: INFO: Pod "pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365871ms Apr 18 00:45:29.891: INFO: Pod "pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010569219s Apr 18 00:45:31.895: INFO: Pod "pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014904764s STEP: Saw pod success Apr 18 00:45:31.895: INFO: Pod "pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f" satisfied condition "Succeeded or Failed" Apr 18 00:45:31.899: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:45:31.976: INFO: Waiting for pod pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f to disappear Apr 18 00:45:31.999: INFO: Pod pod-projected-configmaps-b386a439-d4a7-4413-862a-7bfd70b3f53f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:31.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1291" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3883,"failed":0} ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:32.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:45:32.049: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 18 00:45:32.071: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 18 00:45:37.077: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 18 00:45:37.077: INFO: Creating deployment "test-rolling-update-deployment" Apr 18 00:45:37.083: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 18 00:45:37.091: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 18 00:45:39.121: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 18 00:45:39.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767537, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767537, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767537, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767537, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 18 00:45:41.128: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 18 00:45:41.138: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4512 /apis/apps/v1/namespaces/deployment-4512/deployments/test-rolling-update-deployment 87d588ae-7faf-40bb-a82a-bf002a782284 8944195 1 2020-04-18 00:45:37 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006d2a108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-18 00:45:37 +0000 UTC,LastTransitionTime:2020-04-18 00:45:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-18 00:45:40 +0000 UTC,LastTransitionTime:2020-04-18 00:45:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 18 00:45:41.142: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-4512 /apis/apps/v1/namespaces/deployment-4512/replicasets/test-rolling-update-deployment-664dd8fc7f e63693c2-27c9-42f6-8ad0-6ec0ac5bdb8d 8944183 1 2020-04-18 00:45:37 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 87d588ae-7faf-40bb-a82a-bf002a782284 0xc002b0edf7 0xc002b0edf8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b0ee68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:45:41.142: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 18 00:45:41.142: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4512 /apis/apps/v1/namespaces/deployment-4512/replicasets/test-rolling-update-controller 63e9a981-aa2a-4bc8-ac3b-974e482b90c1 8944194 2 2020-04-18 00:45:32 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 87d588ae-7faf-40bb-a82a-bf002a782284 0xc002b0ed17 0xc002b0ed18}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b0ed78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 18 00:45:41.145: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-w296z" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-w296z test-rolling-update-deployment-664dd8fc7f- deployment-4512 /api/v1/namespaces/deployment-4512/pods/test-rolling-update-deployment-664dd8fc7f-w296z e4899fcd-2513-4f7c-9411-75c936666c4c 8944182 0 2020-04-18 00:45:37 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f e63693c2-27c9-42f6-8ad0-6ec0ac5bdb8d 0xc002b0f7b7 0xc002b0f7b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-szt8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-szt8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-szt8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:45:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:45:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:45:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:45:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.5,StartTime:2020-04-18 00:45:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:45:39 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://0f5fd4cfe287e4c37603859c5720d40d6616338e59cb73a252baeb3f2204578f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:41.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4512" for this suite. • [SLOW TEST:9.191 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":236,"skipped":3883,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:41.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-c7e32bf1-0fa5-4dc5-9946-00c7d2192562 STEP: Creating a pod to test consume configMaps Apr 18 00:45:41.262: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058" in namespace "projected-7873" to be "Succeeded or Failed" Apr 18 00:45:41.264: INFO: Pod "pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089375ms Apr 18 00:45:43.289: INFO: Pod "pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02657833s Apr 18 00:45:45.294: INFO: Pod "pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031502182s STEP: Saw pod success Apr 18 00:45:45.294: INFO: Pod "pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058" satisfied condition "Succeeded or Failed" Apr 18 00:45:45.297: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058 container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:45:45.330: INFO: Waiting for pod pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058 to disappear Apr 18 00:45:45.336: INFO: Pod pod-projected-configmaps-392076e8-63d7-48e1-afbe-fe1333783058 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:45.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7873" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:45.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-5762/configmap-test-f7f1478e-cab8-4153-8f8f-491e2ae70e75 STEP: Creating a pod to test consume configMaps Apr 18 00:45:45.427: INFO: Waiting up to 5m0s for pod "pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5" in namespace "configmap-5762" to be "Succeeded or Failed" Apr 18 00:45:45.430: INFO: Pod "pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419833ms Apr 18 00:45:47.434: INFO: Pod "pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006964097s Apr 18 00:45:49.437: INFO: Pod "pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010416068s STEP: Saw pod success Apr 18 00:45:49.437: INFO: Pod "pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5" satisfied condition "Succeeded or Failed" Apr 18 00:45:49.439: INFO: Trying to get logs from node latest-worker pod pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5 container env-test: STEP: delete the pod Apr 18 00:45:49.462: INFO: Waiting for pod pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5 to disappear Apr 18 00:45:49.489: INFO: Pod pod-configmaps-194a6368-9108-4f0a-a55f-31330e8921d5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:49.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5762" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":3962,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:49.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 18 00:45:49.533: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:45:49.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8209" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":239,"skipped":3974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:45:49.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:07.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4477" for this suite. 
• [SLOW TEST:18.064 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":240,"skipped":4010,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:07.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-5052/secret-test-57fba9f5-a4cd-4620-b0d3-1857217c4249 STEP: Creating a pod to test consume secrets Apr 18 00:46:07.757: INFO: Waiting up to 5m0s for pod "pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6" in namespace "secrets-5052" to be "Succeeded or Failed" Apr 18 00:46:07.760: INFO: Pod "pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258527ms Apr 18 00:46:09.763: INFO: Pod "pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006156426s Apr 18 00:46:11.767: INFO: Pod "pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010321304s STEP: Saw pod success Apr 18 00:46:11.767: INFO: Pod "pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6" satisfied condition "Succeeded or Failed" Apr 18 00:46:11.771: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6 container env-test: STEP: delete the pod Apr 18 00:46:11.792: INFO: Waiting for pod pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6 to disappear Apr 18 00:46:11.796: INFO: Pod pod-configmaps-71f08e21-2ce2-4792-aa52-5b9c87092ea6 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:11.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5052" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:11.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 18 00:46:19.874: INFO: &Pod{ObjectMeta:{send-events-7771ce15-3561-40b3-ab9f-763db3ac7786 events-1259 /api/v1/namespaces/events-1259/pods/send-events-7771ce15-3561-40b3-ab9f-763db3ac7786 90a2f2cf-81bd-4e2d-9956-758c74daabd4 8944520 0 2020-04-18 00:46:11 +0000 UTC map[name:foo time:846367007] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mwnm7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mwnm7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mwnm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveD
eadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:46:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:46:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:46:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-18 00:46:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.81,StartTime:2020-04-18 00:46:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-18 00:46:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://1994e13ad3d43a99b8645fd1a685fc1b7fd59fb33da8943865fcb3085f5a1bb8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 18 00:46:21.879: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 18 00:46:24.442: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:24.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1259" for this suite. 
• [SLOW TEST:12.675 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":242,"skipped":4118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:24.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 18 00:46:24.539: INFO: >>> kubeConfig: /root/.kube/config Apr 18 00:46:27.539: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-8833" for this suite. • [SLOW TEST:13.630 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":243,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:38.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:46:38.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52" in namespace "downward-api-5441" to be "Succeeded or Failed" Apr 18 00:46:38.174: INFO: Pod "downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.152929ms Apr 18 00:46:40.257: INFO: Pod "downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08558462s Apr 18 00:46:42.261: INFO: Pod "downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090210411s STEP: Saw pod success Apr 18 00:46:42.261: INFO: Pod "downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52" satisfied condition "Succeeded or Failed" Apr 18 00:46:42.264: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52 container client-container: STEP: delete the pod Apr 18 00:46:42.300: INFO: Waiting for pod downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52 to disappear Apr 18 00:46:42.338: INFO: Pod downwardapi-volume-eeee8fec-9d78-4ef6-be38-4d75799d9f52 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5441" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:42.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:42.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4219" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":245,"skipped":4179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:42.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25 Apr 18 00:46:42.543: INFO: Pod name my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25: Found 0 pods out of 1 Apr 18 00:46:47.568: INFO: Pod name my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25: Found 1 pods out of 1 Apr 18 00:46:47.568: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25" are running Apr 18 00:46:47.572: INFO: Pod "my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25-wh8sc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:46:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:46:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-18 00:46:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2020-04-18 00:46:42 +0000 UTC Reason: Message:}]) Apr 18 00:46:47.572: INFO: Trying to dial the pod Apr 18 00:46:52.583: INFO: Controller my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25: Got expected result from replica 1 [my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25-wh8sc]: "my-hostname-basic-ecfd2e5d-8048-4628-853a-11b322e78b25-wh8sc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:52.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4189" for this suite. • [SLOW TEST:10.146 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":246,"skipped":4232,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:52.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 18 00:46:52.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228" in namespace "projected-3947" to be "Succeeded or Failed" Apr 18 00:46:52.650: INFO: Pod "downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041408ms Apr 18 00:46:54.760: INFO: Pod "downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114015123s Apr 18 00:46:56.764: INFO: Pod "downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118280476s STEP: Saw pod success Apr 18 00:46:56.764: INFO: Pod "downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228" satisfied condition "Succeeded or Failed" Apr 18 00:46:56.767: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228 container client-container: STEP: delete the pod Apr 18 00:46:56.814: INFO: Waiting for pod downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228 to disappear Apr 18 00:46:56.879: INFO: Pod downwardapi-volume-8e38727a-3937-4f00-a51e-37c86bd5f228 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:46:56.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3947" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4241,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:46:56.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:00.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7083" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4251,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:00.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0418 00:47:05.631506 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 18 00:47:05.631: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2581" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":249,"skipped":4269,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:05.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 18 00:47:10.265: INFO: Successfully updated pod "labelsupdateb0240ddd-872f-4115-90e5-3bb0ba762c31" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:14.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6812" for this suite. 
• [SLOW TEST:8.677 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:14.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 18 00:47:14.388: INFO: Waiting up to 5m0s for pod "pod-a824cac0-5b84-4d56-b993-9b3710cdb39a" in namespace "emptydir-5274" to be "Succeeded or Failed" Apr 18 00:47:14.397: INFO: Pod "pod-a824cac0-5b84-4d56-b993-9b3710cdb39a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.37322ms Apr 18 00:47:16.400: INFO: Pod "pod-a824cac0-5b84-4d56-b993-9b3710cdb39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012240503s Apr 18 00:47:18.405: INFO: Pod "pod-a824cac0-5b84-4d56-b993-9b3710cdb39a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016775186s STEP: Saw pod success Apr 18 00:47:18.405: INFO: Pod "pod-a824cac0-5b84-4d56-b993-9b3710cdb39a" satisfied condition "Succeeded or Failed" Apr 18 00:47:18.408: INFO: Trying to get logs from node latest-worker2 pod pod-a824cac0-5b84-4d56-b993-9b3710cdb39a container test-container: STEP: delete the pod Apr 18 00:47:18.450: INFO: Waiting for pod pod-a824cac0-5b84-4d56-b993-9b3710cdb39a to disappear Apr 18 00:47:18.457: INFO: Pod pod-a824cac0-5b84-4d56-b993-9b3710cdb39a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:18.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5274" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4317,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:18.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 18 
00:47:18.537: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4406 /api/v1/namespaces/watch-4406/configmaps/e2e-watch-test-watch-closed 4b64dc1e-4d5e-41af-9920-85dd657d6696 8944939 0 2020-04-18 00:47:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:47:18.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4406 /api/v1/namespaces/watch-4406/configmaps/e2e-watch-test-watch-closed 4b64dc1e-4d5e-41af-9920-85dd657d6696 8944940 0 2020-04-18 00:47:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 18 00:47:18.589: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4406 /api/v1/namespaces/watch-4406/configmaps/e2e-watch-test-watch-closed 4b64dc1e-4d5e-41af-9920-85dd657d6696 8944941 0 2020-04-18 00:47:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 18 00:47:18.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4406 /api/v1/namespaces/watch-4406/configmaps/e2e-watch-test-watch-closed 4b64dc1e-4d5e-41af-9920-85dd657d6696 8944943 0 2020-04-18 00:47:18 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:18.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4406" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":252,"skipped":4325,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:18.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:47:18.673: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 18 00:47:18.680: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:18.684: INFO: Number of nodes with available pods: 0 Apr 18 00:47:18.684: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:19.689: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:19.692: INFO: Number of nodes with available pods: 0 Apr 18 00:47:19.692: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:20.786: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:20.811: INFO: Number of nodes with available pods: 0 Apr 18 00:47:20.811: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:21.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:21.697: INFO: Number of nodes with available pods: 1 Apr 18 00:47:21.697: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:22.690: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:22.694: INFO: Number of nodes with available pods: 2 Apr 18 00:47:22.694: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 18 00:47:22.725: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 18 00:47:22.725: INFO: Wrong image for pod: daemon-set-v9mct. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:22.745: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:23.881: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:23.881: INFO: Wrong image for pod: daemon-set-v9mct. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:24.013: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:24.748: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:24.748: INFO: Wrong image for pod: daemon-set-v9mct. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:24.748: INFO: Pod daemon-set-v9mct is not available Apr 18 00:47:24.752: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:25.749: INFO: Pod daemon-set-4265s is not available Apr 18 00:47:25.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 18 00:47:25.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:26.749: INFO: Pod daemon-set-4265s is not available Apr 18 00:47:26.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:26.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:28.103: INFO: Pod daemon-set-4265s is not available Apr 18 00:47:28.103: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:28.131: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:28.749: INFO: Pod daemon-set-4265s is not available Apr 18 00:47:28.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:28.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:29.750: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:29.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:30.749: INFO: Wrong image for pod: daemon-set-8lhm4. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:30.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:31.750: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:31.750: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:31.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:32.747: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:32.747: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:32.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:33.750: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:33.750: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:33.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:34.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 18 00:47:34.749: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:34.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:35.750: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:35.750: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:35.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:36.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:36.749: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:36.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:37.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:37.749: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:37.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:38.750: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 18 00:47:38.750: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:38.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:39.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:39.750: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:39.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:40.748: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:40.748: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:40.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:41.749: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 18 00:47:41.749: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:41.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:42.754: INFO: Wrong image for pod: daemon-set-8lhm4. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 18 00:47:42.754: INFO: Pod daemon-set-8lhm4 is not available Apr 18 00:47:42.782: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:43.749: INFO: Pod daemon-set-7tmmc is not available Apr 18 00:47:43.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 18 00:47:43.758: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:43.761: INFO: Number of nodes with available pods: 1 Apr 18 00:47:43.761: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:44.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:44.826: INFO: Number of nodes with available pods: 1 Apr 18 00:47:44.826: INFO: Node latest-worker is running more than one daemon pod Apr 18 00:47:45.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 00:47:45.770: INFO: Number of nodes with available pods: 2 Apr 18 00:47:45.770: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7258, will wait for the garbage collector to delete the pods Apr 18 00:47:45.844: INFO: Deleting DaemonSet.extensions 
daemon-set took: 6.901333ms Apr 18 00:47:46.144: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.219781ms Apr 18 00:47:52.747: INFO: Number of nodes with available pods: 0 Apr 18 00:47:52.747: INFO: Number of running nodes: 0, number of available pods: 0 Apr 18 00:47:52.749: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7258/daemonsets","resourceVersion":"8945151"},"items":null} Apr 18 00:47:52.751: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7258/pods","resourceVersion":"8945151"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:52.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7258" for this suite. • [SLOW TEST:34.164 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":253,"skipped":4331,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:52.767: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-0f524710-ca49-42f0-9ee5-8a56a04d71d4 STEP: Creating a pod to test consume configMaps Apr 18 00:47:52.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3" in namespace "projected-2756" to be "Succeeded or Failed" Apr 18 00:47:52.864: INFO: Pod "pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.817579ms Apr 18 00:47:54.868: INFO: Pod "pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022296896s Apr 18 00:47:56.872: INFO: Pod "pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026230762s STEP: Saw pod success Apr 18 00:47:56.872: INFO: Pod "pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3" satisfied condition "Succeeded or Failed" Apr 18 00:47:56.875: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3 container projected-configmap-volume-test: STEP: delete the pod Apr 18 00:47:56.908: INFO: Waiting for pod pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3 to disappear Apr 18 00:47:56.921: INFO: Pod pod-projected-configmaps-fa209131-8b3a-4853-80d3-9792276e2ac3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:47:56.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2756" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4341,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:47:56.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:47:57.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:47:59.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767677, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767677, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767677, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722767677, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:48:02.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:48:02.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2898-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:48:03.878: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8765" for this suite. STEP: Destroying namespace "webhook-8765-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.021 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":255,"skipped":4356,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:48:03.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 18 00:48:04.001: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check
the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:48:19.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-743" for this suite. • [SLOW TEST:15.328 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":256,"skipped":4370,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:48:19.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:48:19.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-614" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":257,"skipped":4391,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:48:19.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers 
Apr 18 00:48:25.478: INFO: DNS probes using dns-test-6fedb8a6-1c41-4337-bdc7-69b33d42401e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:48:33.593: INFO: File wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:33.596: INFO: File jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:33.596: INFO: Lookups using dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 failed for: [wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local] Apr 18 00:48:38.601: INFO: File wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:38.605: INFO: File jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 18 00:48:38.605: INFO: Lookups using dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 failed for: [wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local] Apr 18 00:48:43.601: INFO: File wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:43.605: INFO: File jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:43.606: INFO: Lookups using dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 failed for: [wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local] Apr 18 00:48:48.601: INFO: File wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 18 00:48:48.605: INFO: File jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local from pod dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 18 00:48:48.605: INFO: Lookups using dns-8720/dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 failed for: [wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local] Apr 18 00:48:53.605: INFO: DNS probes using dns-test-6cc0f3fb-f507-46d9-9607-d67167107a95 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8720.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8720.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:49:00.005: INFO: DNS probes using dns-test-15c933eb-3f12-4c75-a1d0-b8f8c0641bbf succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:49:00.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8720" for this suite. 
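The probe loop the DNS test injects into its wheezy/jessie pods is plain shell. As a hedged sketch (the `build_probe` helper and its argument layout are my own; only the `dig` loop itself comes from the "Running these commands" lines above), the command can be parameterised like this:

```shell
#!/bin/sh
# Sketch only: build_probe is a hypothetical helper; the loop it emits
# mirrors the "Running these commands on wheezy/jessie" lines in the log.
build_probe() {
  fqdn=$1; rrtype=$2; outfile=$3
  # 30 attempts, one per second, each overwriting the result file
  printf 'for i in `seq 1 30`; do dig +short %s %s > %s; sleep 1; done\n' \
    "$fqdn" "$rrtype" "$outfile"
}

cmd=$(build_probe dns-test-service-3.dns-8720.svc.cluster.local CNAME \
  /results/wheezy_udp@dns-test-service-3.dns-8720.svc.cluster.local)
printf '%s\n' "$cmd"
```

The suite then flips the ExternalName target (foo.example.com to bar.example.com) and re-runs the same loop, which is why the intermediate lookups in the log report the stale CNAME until the change propagates.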
• [SLOW TEST:40.728 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":258,"skipped":4398,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:49:00.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-f6e05dce-4536-4837-b8c7-edb865f5864a STEP: Creating a pod to test consume secrets Apr 18 00:49:00.154: INFO: Waiting up to 5m0s for pod "pod-secrets-d78b8016-6945-4875-9720-62106450d2b7" in namespace "secrets-2365" to be "Succeeded or Failed" Apr 18 00:49:00.379: INFO: Pod "pod-secrets-d78b8016-6945-4875-9720-62106450d2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 224.418773ms Apr 18 00:49:02.383: INFO: Pod "pod-secrets-d78b8016-6945-4875-9720-62106450d2b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.228569894s Apr 18 00:49:04.387: INFO: Pod "pod-secrets-d78b8016-6945-4875-9720-62106450d2b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.232838726s STEP: Saw pod success Apr 18 00:49:04.387: INFO: Pod "pod-secrets-d78b8016-6945-4875-9720-62106450d2b7" satisfied condition "Succeeded or Failed" Apr 18 00:49:04.391: INFO: Trying to get logs from node latest-worker pod pod-secrets-d78b8016-6945-4875-9720-62106450d2b7 container secret-volume-test: STEP: delete the pod Apr 18 00:49:04.429: INFO: Waiting for pod pod-secrets-d78b8016-6945-4875-9720-62106450d2b7 to disappear Apr 18 00:49:04.463: INFO: Pod pod-secrets-d78b8016-6945-4875-9720-62106450d2b7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:49:04.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2365" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:49:04.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-0e287470-aae0-4eae-91a9-5960f6e15265 STEP: Creating a pod to test consume secrets Apr 18 00:49:04.603: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921" in namespace "projected-9690" to be "Succeeded or Failed" Apr 18 00:49:04.610: INFO: Pod "pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251328ms Apr 18 00:49:06.618: INFO: Pod "pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014988238s Apr 18 00:49:08.624: INFO: Pod "pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020644999s STEP: Saw pod success Apr 18 00:49:08.624: INFO: Pod "pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921" satisfied condition "Succeeded or Failed" Apr 18 00:49:08.626: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921 container projected-secret-volume-test: STEP: delete the pod Apr 18 00:49:08.677: INFO: Waiting for pod pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921 to disappear Apr 18 00:49:08.714: INFO: Pod pod-projected-secrets-19c590bf-7d45-42ac-87aa-6bc2f4c36921 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:49:08.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9690" for this suite. 
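The two secret-volume tests above (plain Secrets and Projected secret) exercise the same pod shape: a secret mounted as a volume with an `items` mapping and an explicit per-item file mode. A minimal manifest sketch of that pattern follows; the secret/pod names, image, and the 0400 mode are illustrative assumptions, not values taken from the log:

```shell
#!/bin/sh
# Hypothetical manifest mirroring the "volume with mappings and Item Mode set"
# pattern; names, image, and the 0400 mode are assumptions, not from the log.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      items:                 # "mappings": remap key data-1 to a new path
      - key: data-1
        path: new-path-data-1
        mode: 0400           # "Item Mode set": per-item file permissions
  containers:
  - name: secret-volume-test
    image: busybox           # illustrative; the suite uses its own test image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
EOF
)
printf '%s\n' "$manifest"
```

The test then waits for the pod to reach "Succeeded or Failed" (as logged above) and verifies the mounted file's content and mode from the container's logs.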
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4455,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:49:08.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6367 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6367 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6367 Apr 18 00:49:08.795: INFO: Found 0 stateful pods, waiting for 1 Apr 18 00:49:18.800: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 18 00:49:18.803: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:49:19.086: INFO: stderr: "I0418 00:49:18.954729 2613 log.go:172] (0xc00003b550) (0xc0005d0140) Create stream\nI0418 00:49:18.954784 2613 log.go:172] (0xc00003b550) (0xc0005d0140) Stream added, broadcasting: 1\nI0418 00:49:18.957558 2613 log.go:172] (0xc00003b550) Reply frame received for 1\nI0418 00:49:18.957628 2613 log.go:172] (0xc00003b550) (0xc000626000) Create stream\nI0418 00:49:18.957659 2613 log.go:172] (0xc00003b550) (0xc000626000) Stream added, broadcasting: 3\nI0418 00:49:18.958764 2613 log.go:172] (0xc00003b550) Reply frame received for 3\nI0418 00:49:18.958790 2613 log.go:172] (0xc00003b550) (0xc0005d01e0) Create stream\nI0418 00:49:18.958799 2613 log.go:172] (0xc00003b550) (0xc0005d01e0) Stream added, broadcasting: 5\nI0418 00:49:18.959908 2613 log.go:172] (0xc00003b550) Reply frame received for 5\nI0418 00:49:19.045432 2613 log.go:172] (0xc00003b550) Data frame received for 5\nI0418 00:49:19.045463 2613 log.go:172] (0xc0005d01e0) (5) Data frame handling\nI0418 00:49:19.045479 2613 log.go:172] (0xc0005d01e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:49:19.077672 2613 log.go:172] (0xc00003b550) Data frame received for 3\nI0418 00:49:19.077712 2613 log.go:172] (0xc000626000) (3) Data frame handling\nI0418 00:49:19.077736 2613 log.go:172] (0xc000626000) (3) Data frame sent\nI0418 00:49:19.078047 2613 log.go:172] (0xc00003b550) Data frame received for 5\nI0418 00:49:19.078088 2613 log.go:172] (0xc0005d01e0) (5) Data frame handling\nI0418 00:49:19.078189 2613 log.go:172] (0xc00003b550) Data frame received for 3\nI0418 00:49:19.078218 2613 log.go:172] (0xc000626000) (3) Data frame handling\nI0418 00:49:19.080473 2613 log.go:172] (0xc00003b550) Data frame received for 1\nI0418 00:49:19.080519 2613 
log.go:172] (0xc0005d0140) (1) Data frame handling\nI0418 00:49:19.080558 2613 log.go:172] (0xc0005d0140) (1) Data frame sent\nI0418 00:49:19.080783 2613 log.go:172] (0xc00003b550) (0xc0005d0140) Stream removed, broadcasting: 1\nI0418 00:49:19.081072 2613 log.go:172] (0xc00003b550) Go away received\nI0418 00:49:19.081677 2613 log.go:172] (0xc00003b550) (0xc0005d0140) Stream removed, broadcasting: 1\nI0418 00:49:19.081725 2613 log.go:172] (0xc00003b550) (0xc000626000) Stream removed, broadcasting: 3\nI0418 00:49:19.081743 2613 log.go:172] (0xc00003b550) (0xc0005d01e0) Stream removed, broadcasting: 5\n" Apr 18 00:49:19.086: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:49:19.086: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:49:19.089: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 18 00:49:29.094: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:49:29.094: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:49:29.125: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999479s Apr 18 00:49:30.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976469503s Apr 18 00:49:31.133: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972006365s Apr 18 00:49:32.137: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968510726s Apr 18 00:49:33.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964387839s Apr 18 00:49:34.155: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950350167s Apr 18 00:49:35.159: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946111603s Apr 18 00:49:36.163: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.942714252s Apr 18 00:49:37.166: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 1.93892126s Apr 18 00:49:38.170: INFO: Verifying statefulset ss doesn't scale past 1 for another 935.884489ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6367 Apr 18 00:49:39.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:49:39.431: INFO: stderr: "I0418 00:49:39.329311 2635 log.go:172] (0xc00003a0b0) (0xc000a0c000) Create stream\nI0418 00:49:39.329371 2635 log.go:172] (0xc00003a0b0) (0xc000a0c000) Stream added, broadcasting: 1\nI0418 00:49:39.331943 2635 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0418 00:49:39.332007 2635 log.go:172] (0xc00003a0b0) (0xc0002da5a0) Create stream\nI0418 00:49:39.332028 2635 log.go:172] (0xc00003a0b0) (0xc0002da5a0) Stream added, broadcasting: 3\nI0418 00:49:39.333065 2635 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0418 00:49:39.333096 2635 log.go:172] (0xc00003a0b0) (0xc00061b400) Create stream\nI0418 00:49:39.333105 2635 log.go:172] (0xc00003a0b0) (0xc00061b400) Stream added, broadcasting: 5\nI0418 00:49:39.334196 2635 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0418 00:49:39.425813 2635 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:49:39.425850 2635 log.go:172] (0xc0002da5a0) (3) Data frame handling\nI0418 00:49:39.425871 2635 log.go:172] (0xc0002da5a0) (3) Data frame sent\nI0418 00:49:39.425885 2635 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:49:39.425898 2635 log.go:172] (0xc0002da5a0) (3) Data frame handling\nI0418 00:49:39.426065 2635 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0418 00:49:39.426090 2635 log.go:172] (0xc00061b400) (5) Data frame handling\nI0418 00:49:39.426112 2635 log.go:172] (0xc00061b400) (5) Data frame sent\nI0418 
00:49:39.426124 2635 log.go:172] (0xc00003a0b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0418 00:49:39.426135 2635 log.go:172] (0xc00061b400) (5) Data frame handling\nI0418 00:49:39.427534 2635 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0418 00:49:39.427550 2635 log.go:172] (0xc000a0c000) (1) Data frame handling\nI0418 00:49:39.427558 2635 log.go:172] (0xc000a0c000) (1) Data frame sent\nI0418 00:49:39.427567 2635 log.go:172] (0xc00003a0b0) (0xc000a0c000) Stream removed, broadcasting: 1\nI0418 00:49:39.427583 2635 log.go:172] (0xc00003a0b0) Go away received\nI0418 00:49:39.427882 2635 log.go:172] (0xc00003a0b0) (0xc000a0c000) Stream removed, broadcasting: 1\nI0418 00:49:39.427897 2635 log.go:172] (0xc00003a0b0) (0xc0002da5a0) Stream removed, broadcasting: 3\nI0418 00:49:39.427903 2635 log.go:172] (0xc00003a0b0) (0xc00061b400) Stream removed, broadcasting: 5\n" Apr 18 00:49:39.431: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:49:39.431: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:49:39.434: INFO: Found 1 stateful pods, waiting for 3 Apr 18 00:49:49.438: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:49:49.438: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 18 00:49:49.438: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 18 00:49:49.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:49:49.676: INFO: stderr: "I0418 00:49:49.575942 2655 
log.go:172] (0xc0006a6210) (0xc00044aa00) Create stream\nI0418 00:49:49.575994 2655 log.go:172] (0xc0006a6210) (0xc00044aa00) Stream added, broadcasting: 1\nI0418 00:49:49.578696 2655 log.go:172] (0xc0006a6210) Reply frame received for 1\nI0418 00:49:49.578741 2655 log.go:172] (0xc0006a6210) (0xc0009d8000) Create stream\nI0418 00:49:49.578756 2655 log.go:172] (0xc0006a6210) (0xc0009d8000) Stream added, broadcasting: 3\nI0418 00:49:49.579810 2655 log.go:172] (0xc0006a6210) Reply frame received for 3\nI0418 00:49:49.579875 2655 log.go:172] (0xc0006a6210) (0xc000914000) Create stream\nI0418 00:49:49.579901 2655 log.go:172] (0xc0006a6210) (0xc000914000) Stream added, broadcasting: 5\nI0418 00:49:49.581013 2655 log.go:172] (0xc0006a6210) Reply frame received for 5\nI0418 00:49:49.669258 2655 log.go:172] (0xc0006a6210) Data frame received for 3\nI0418 00:49:49.669299 2655 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0418 00:49:49.669318 2655 log.go:172] (0xc0009d8000) (3) Data frame sent\nI0418 00:49:49.669330 2655 log.go:172] (0xc0006a6210) Data frame received for 3\nI0418 00:49:49.669341 2655 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0418 00:49:49.669381 2655 log.go:172] (0xc0006a6210) Data frame received for 5\nI0418 00:49:49.669422 2655 log.go:172] (0xc000914000) (5) Data frame handling\nI0418 00:49:49.669457 2655 log.go:172] (0xc000914000) (5) Data frame sent\nI0418 00:49:49.669484 2655 log.go:172] (0xc0006a6210) Data frame received for 5\nI0418 00:49:49.669531 2655 log.go:172] (0xc000914000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:49:49.671022 2655 log.go:172] (0xc0006a6210) Data frame received for 1\nI0418 00:49:49.671045 2655 log.go:172] (0xc00044aa00) (1) Data frame handling\nI0418 00:49:49.671060 2655 log.go:172] (0xc00044aa00) (1) Data frame sent\nI0418 00:49:49.671082 2655 log.go:172] (0xc0006a6210) (0xc00044aa00) Stream removed, broadcasting: 1\nI0418 00:49:49.671104 2655 log.go:172] 
(0xc0006a6210) Go away received\nI0418 00:49:49.671479 2655 log.go:172] (0xc0006a6210) (0xc00044aa00) Stream removed, broadcasting: 1\nI0418 00:49:49.671503 2655 log.go:172] (0xc0006a6210) (0xc0009d8000) Stream removed, broadcasting: 3\nI0418 00:49:49.671516 2655 log.go:172] (0xc0006a6210) (0xc000914000) Stream removed, broadcasting: 5\n" Apr 18 00:49:49.676: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:49:49.676: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:49:49.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:49:49.923: INFO: stderr: "I0418 00:49:49.798400 2674 log.go:172] (0xc0000e06e0) (0xc0007a9720) Create stream\nI0418 00:49:49.798452 2674 log.go:172] (0xc0000e06e0) (0xc0007a9720) Stream added, broadcasting: 1\nI0418 00:49:49.800324 2674 log.go:172] (0xc0000e06e0) Reply frame received for 1\nI0418 00:49:49.800351 2674 log.go:172] (0xc0000e06e0) (0xc000629720) Create stream\nI0418 00:49:49.800359 2674 log.go:172] (0xc0000e06e0) (0xc000629720) Stream added, broadcasting: 3\nI0418 00:49:49.801027 2674 log.go:172] (0xc0000e06e0) Reply frame received for 3\nI0418 00:49:49.801087 2674 log.go:172] (0xc0000e06e0) (0xc0004f2b40) Create stream\nI0418 00:49:49.801104 2674 log.go:172] (0xc0000e06e0) (0xc0004f2b40) Stream added, broadcasting: 5\nI0418 00:49:49.801943 2674 log.go:172] (0xc0000e06e0) Reply frame received for 5\nI0418 00:49:49.863452 2674 log.go:172] (0xc0000e06e0) Data frame received for 5\nI0418 00:49:49.863491 2674 log.go:172] (0xc0004f2b40) (5) Data frame handling\nI0418 00:49:49.863527 2674 log.go:172] (0xc0004f2b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:49:49.914929 2674 
log.go:172] (0xc0000e06e0) Data frame received for 3\nI0418 00:49:49.914962 2674 log.go:172] (0xc000629720) (3) Data frame handling\nI0418 00:49:49.914985 2674 log.go:172] (0xc000629720) (3) Data frame sent\nI0418 00:49:49.915280 2674 log.go:172] (0xc0000e06e0) Data frame received for 3\nI0418 00:49:49.915311 2674 log.go:172] (0xc000629720) (3) Data frame handling\nI0418 00:49:49.915335 2674 log.go:172] (0xc0000e06e0) Data frame received for 5\nI0418 00:49:49.915349 2674 log.go:172] (0xc0004f2b40) (5) Data frame handling\nI0418 00:49:49.916822 2674 log.go:172] (0xc0000e06e0) Data frame received for 1\nI0418 00:49:49.916847 2674 log.go:172] (0xc0007a9720) (1) Data frame handling\nI0418 00:49:49.916870 2674 log.go:172] (0xc0007a9720) (1) Data frame sent\nI0418 00:49:49.916886 2674 log.go:172] (0xc0000e06e0) (0xc0007a9720) Stream removed, broadcasting: 1\nI0418 00:49:49.916912 2674 log.go:172] (0xc0000e06e0) Go away received\nI0418 00:49:49.917577 2674 log.go:172] (0xc0000e06e0) (0xc0007a9720) Stream removed, broadcasting: 1\nI0418 00:49:49.917608 2674 log.go:172] (0xc0000e06e0) (0xc000629720) Stream removed, broadcasting: 3\nI0418 00:49:49.917623 2674 log.go:172] (0xc0000e06e0) (0xc0004f2b40) Stream removed, broadcasting: 5\n" Apr 18 00:49:49.923: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:49:49.923: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:49:49.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 18 00:49:50.149: INFO: stderr: "I0418 00:49:50.053204 2696 log.go:172] (0xc00003a0b0) (0xc000a4a000) Create stream\nI0418 00:49:50.053258 2696 log.go:172] (0xc00003a0b0) (0xc000a4a000) Stream added, broadcasting: 1\nI0418 00:49:50.055053 2696 
log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0418 00:49:50.055096 2696 log.go:172] (0xc00003a0b0) (0xc000a4a0a0) Create stream\nI0418 00:49:50.055111 2696 log.go:172] (0xc00003a0b0) (0xc000a4a0a0) Stream added, broadcasting: 3\nI0418 00:49:50.056093 2696 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0418 00:49:50.056128 2696 log.go:172] (0xc00003a0b0) (0xc00082d180) Create stream\nI0418 00:49:50.056142 2696 log.go:172] (0xc00003a0b0) (0xc00082d180) Stream added, broadcasting: 5\nI0418 00:49:50.056962 2696 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0418 00:49:50.113544 2696 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0418 00:49:50.113573 2696 log.go:172] (0xc00082d180) (5) Data frame handling\nI0418 00:49:50.113593 2696 log.go:172] (0xc00082d180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0418 00:49:50.142508 2696 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:49:50.142557 2696 log.go:172] (0xc000a4a0a0) (3) Data frame handling\nI0418 00:49:50.142574 2696 log.go:172] (0xc000a4a0a0) (3) Data frame sent\nI0418 00:49:50.142590 2696 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0418 00:49:50.142607 2696 log.go:172] (0xc000a4a0a0) (3) Data frame handling\nI0418 00:49:50.143120 2696 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0418 00:49:50.143135 2696 log.go:172] (0xc00082d180) (5) Data frame handling\nI0418 00:49:50.144751 2696 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0418 00:49:50.144774 2696 log.go:172] (0xc000a4a000) (1) Data frame handling\nI0418 00:49:50.144784 2696 log.go:172] (0xc000a4a000) (1) Data frame sent\nI0418 00:49:50.144925 2696 log.go:172] (0xc00003a0b0) (0xc000a4a000) Stream removed, broadcasting: 1\nI0418 00:49:50.145391 2696 log.go:172] (0xc00003a0b0) Go away received\nI0418 00:49:50.145680 2696 log.go:172] (0xc00003a0b0) (0xc000a4a000) Stream removed, broadcasting: 1\nI0418 00:49:50.145701 2696 log.go:172] (0xc00003a0b0) 
(0xc000a4a0a0) Stream removed, broadcasting: 3\nI0418 00:49:50.145713 2696 log.go:172] (0xc00003a0b0) (0xc00082d180) Stream removed, broadcasting: 5\n" Apr 18 00:49:50.149: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 18 00:49:50.149: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 18 00:49:50.149: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:49:50.153: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 18 00:50:00.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:50:00.160: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:50:00.160: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 18 00:50:00.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999449s Apr 18 00:50:01.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993301195s Apr 18 00:50:02.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988919818s Apr 18 00:50:03.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983528646s Apr 18 00:50:04.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972059906s Apr 18 00:50:05.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966801277s Apr 18 00:50:06.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961916552s Apr 18 00:50:07.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.956810484s Apr 18 00:50:08.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.951920418s Apr 18 00:50:09.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.952115ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-6367 Apr 18 00:50:10.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:50:10.449: INFO: stderr: "I0418 00:50:10.349483 2717 log.go:172] (0xc0006b6a50) (0xc0005ff360) Create stream\nI0418 00:50:10.349538 2717 log.go:172] (0xc0006b6a50) (0xc0005ff360) Stream added, broadcasting: 1\nI0418 00:50:10.352066 2717 log.go:172] (0xc0006b6a50) Reply frame received for 1\nI0418 00:50:10.352100 2717 log.go:172] (0xc0006b6a50) (0xc0006cc000) Create stream\nI0418 00:50:10.352128 2717 log.go:172] (0xc0006b6a50) (0xc0006cc000) Stream added, broadcasting: 3\nI0418 00:50:10.353069 2717 log.go:172] (0xc0006b6a50) Reply frame received for 3\nI0418 00:50:10.353099 2717 log.go:172] (0xc0006b6a50) (0xc0006cc140) Create stream\nI0418 00:50:10.353205 2717 log.go:172] (0xc0006b6a50) (0xc0006cc140) Stream added, broadcasting: 5\nI0418 00:50:10.354294 2717 log.go:172] (0xc0006b6a50) Reply frame received for 5\nI0418 00:50:10.442470 2717 log.go:172] (0xc0006b6a50) Data frame received for 3\nI0418 00:50:10.442502 2717 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0418 00:50:10.442514 2717 log.go:172] (0xc0006cc000) (3) Data frame sent\nI0418 00:50:10.442522 2717 log.go:172] (0xc0006b6a50) Data frame received for 3\nI0418 00:50:10.442529 2717 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0418 00:50:10.442564 2717 log.go:172] (0xc0006b6a50) Data frame received for 5\nI0418 00:50:10.442590 2717 log.go:172] (0xc0006cc140) (5) Data frame handling\nI0418 00:50:10.442609 2717 log.go:172] (0xc0006cc140) (5) Data frame sent\nI0418 00:50:10.442626 2717 log.go:172] (0xc0006b6a50) Data frame received for 5\nI0418 00:50:10.442637 2717 log.go:172] (0xc0006cc140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0418 00:50:10.444471 2717 log.go:172] (0xc0006b6a50)
Data frame received for 1\nI0418 00:50:10.444566 2717 log.go:172] (0xc0005ff360) (1) Data frame handling\nI0418 00:50:10.444588 2717 log.go:172] (0xc0005ff360) (1) Data frame sent\nI0418 00:50:10.444597 2717 log.go:172] (0xc0006b6a50) (0xc0005ff360) Stream removed, broadcasting: 1\nI0418 00:50:10.444612 2717 log.go:172] (0xc0006b6a50) Go away received\nI0418 00:50:10.445054 2717 log.go:172] (0xc0006b6a50) (0xc0005ff360) Stream removed, broadcasting: 1\nI0418 00:50:10.445073 2717 log.go:172] (0xc0006b6a50) (0xc0006cc000) Stream removed, broadcasting: 3\nI0418 00:50:10.445084 2717 log.go:172] (0xc0006b6a50) (0xc0006cc140) Stream removed, broadcasting: 5\n" Apr 18 00:50:10.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:50:10.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:50:10.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:50:10.646: INFO: stderr: "I0418 00:50:10.578881 2739 log.go:172] (0xc000b8a790) (0xc000900140) Create stream\nI0418 00:50:10.578925 2739 log.go:172] (0xc000b8a790) (0xc000900140) Stream added, broadcasting: 1\nI0418 00:50:10.581306 2739 log.go:172] (0xc000b8a790) Reply frame received for 1\nI0418 00:50:10.581357 2739 log.go:172] (0xc000b8a790) (0xc0006cb2c0) Create stream\nI0418 00:50:10.581371 2739 log.go:172] (0xc000b8a790) (0xc0006cb2c0) Stream added, broadcasting: 3\nI0418 00:50:10.582130 2739 log.go:172] (0xc000b8a790) Reply frame received for 3\nI0418 00:50:10.582155 2739 log.go:172] (0xc000b8a790) (0xc000426aa0) Create stream\nI0418 00:50:10.582163 2739 log.go:172] (0xc000b8a790) (0xc000426aa0) Stream added, broadcasting: 5\nI0418 00:50:10.582904 2739 log.go:172] (0xc000b8a790) Reply frame received for 
5\nI0418 00:50:10.641058 2739 log.go:172] (0xc000b8a790) Data frame received for 3\nI0418 00:50:10.641089 2739 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0418 00:50:10.641201 2739 log.go:172] (0xc0006cb2c0) (3) Data frame sent\nI0418 00:50:10.641216 2739 log.go:172] (0xc000b8a790) Data frame received for 3\nI0418 00:50:10.641224 2739 log.go:172] (0xc0006cb2c0) (3) Data frame handling\nI0418 00:50:10.641360 2739 log.go:172] (0xc000b8a790) Data frame received for 5\nI0418 00:50:10.641380 2739 log.go:172] (0xc000426aa0) (5) Data frame handling\nI0418 00:50:10.641393 2739 log.go:172] (0xc000426aa0) (5) Data frame sent\nI0418 00:50:10.641399 2739 log.go:172] (0xc000b8a790) Data frame received for 5\nI0418 00:50:10.641404 2739 log.go:172] (0xc000426aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0418 00:50:10.643161 2739 log.go:172] (0xc000b8a790) Data frame received for 1\nI0418 00:50:10.643178 2739 log.go:172] (0xc000900140) (1) Data frame handling\nI0418 00:50:10.643196 2739 log.go:172] (0xc000900140) (1) Data frame sent\nI0418 00:50:10.643208 2739 log.go:172] (0xc000b8a790) (0xc000900140) Stream removed, broadcasting: 1\nI0418 00:50:10.643365 2739 log.go:172] (0xc000b8a790) Go away received\nI0418 00:50:10.643463 2739 log.go:172] (0xc000b8a790) (0xc000900140) Stream removed, broadcasting: 1\nI0418 00:50:10.643475 2739 log.go:172] (0xc000b8a790) (0xc0006cb2c0) Stream removed, broadcasting: 3\nI0418 00:50:10.643481 2739 log.go:172] (0xc000b8a790) (0xc000426aa0) Stream removed, broadcasting: 5\n" Apr 18 00:50:10.647: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:50:10.647: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:50:10.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6367 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 18 00:50:10.836: INFO: stderr: "I0418 00:50:10.765347 2760 log.go:172] (0xc000a3a210) (0xc000abc140) Create stream\nI0418 00:50:10.765397 2760 log.go:172] (0xc000a3a210) (0xc000abc140) Stream added, broadcasting: 1\nI0418 00:50:10.770290 2760 log.go:172] (0xc000a3a210) Reply frame received for 1\nI0418 00:50:10.770330 2760 log.go:172] (0xc000a3a210) (0xc0003d5720) Create stream\nI0418 00:50:10.770340 2760 log.go:172] (0xc000a3a210) (0xc0003d5720) Stream added, broadcasting: 3\nI0418 00:50:10.771140 2760 log.go:172] (0xc000a3a210) Reply frame received for 3\nI0418 00:50:10.771165 2760 log.go:172] (0xc000a3a210) (0xc00085c780) Create stream\nI0418 00:50:10.771179 2760 log.go:172] (0xc000a3a210) (0xc00085c780) Stream added, broadcasting: 5\nI0418 00:50:10.772037 2760 log.go:172] (0xc000a3a210) Reply frame received for 5\nI0418 00:50:10.829688 2760 log.go:172] (0xc000a3a210) Data frame received for 5\nI0418 00:50:10.829718 2760 log.go:172] (0xc00085c780) (5) Data frame handling\nI0418 00:50:10.829732 2760 log.go:172] (0xc00085c780) (5) Data frame sent\nI0418 00:50:10.829738 2760 log.go:172] (0xc000a3a210) Data frame received for 5\nI0418 00:50:10.829742 2760 log.go:172] (0xc00085c780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0418 00:50:10.829759 2760 log.go:172] (0xc000a3a210) Data frame received for 3\nI0418 00:50:10.829763 2760 log.go:172] (0xc0003d5720) (3) Data frame handling\nI0418 00:50:10.829767 2760 log.go:172] (0xc0003d5720) (3) Data frame sent\nI0418 00:50:10.829771 2760 log.go:172] (0xc000a3a210) Data frame received for 3\nI0418 00:50:10.829775 2760 log.go:172] (0xc0003d5720) (3) Data frame handling\nI0418 00:50:10.831113 2760 log.go:172] (0xc000a3a210) Data frame received for 1\nI0418 00:50:10.831135 2760 log.go:172] (0xc000abc140) (1) Data frame handling\nI0418 00:50:10.831147 2760 log.go:172] (0xc000abc140) (1) Data frame sent\nI0418 
00:50:10.831158 2760 log.go:172] (0xc000a3a210) (0xc000abc140) Stream removed, broadcasting: 1\nI0418 00:50:10.831173 2760 log.go:172] (0xc000a3a210) Go away received\nI0418 00:50:10.831552 2760 log.go:172] (0xc000a3a210) (0xc000abc140) Stream removed, broadcasting: 1\nI0418 00:50:10.831589 2760 log.go:172] (0xc000a3a210) (0xc0003d5720) Stream removed, broadcasting: 3\nI0418 00:50:10.831603 2760 log.go:172] (0xc000a3a210) (0xc00085c780) Stream removed, broadcasting: 5\n" Apr 18 00:50:10.836: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 18 00:50:10.836: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 18 00:50:10.836: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 18 00:50:40.896: INFO: Deleting all statefulset in ns statefulset-6367 Apr 18 00:50:40.900: INFO: Scaling statefulset ss to 0 Apr 18 00:50:40.907: INFO: Waiting for statefulset status.replicas updated to 0 Apr 18 00:50:40.909: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:50:40.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6367" for this suite. 
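Editor's note: the readiness flip driven by the repeated `kubectl exec ... mv` commands in the StatefulSet test above is simply a file move in and out of httpd's document root (the HTTP readiness probe fails while index.html is absent, which is what halts further scaling). A minimal local sketch of that toggle, using temp directories as stand-ins for the pod's /usr/local/apache2/htdocs and /tmp (the directory names here are illustrative, not the test's actual paths):

```shell
#!/bin/sh
# Sketch of the readiness toggle: httpd's probe serves index.html, so moving
# it away makes the probe fail and moving it back restores readiness.
set -eu
docroot=$(mktemp -d)   # stand-in for /usr/local/apache2/htdocs
stash=$(mktemp -d)     # stand-in for /tmp inside the pod

echo 'hello' > "$docroot/index.html"

# Break readiness: hide the file the probe fetches. `|| true` mirrors the
# test's tolerance of the file already having been moved on a retry.
mv -v "$docroot/index.html" "$stash/" || true
[ ! -e "$docroot/index.html" ] && echo "probe would now fail"

# Restore readiness.
mv -v "$stash/index.html" "$docroot/" || true
[ -e "$docroot/index.html" ] && echo "probe would now succeed"
```

The `|| true` matters in the real test because the exec is retried against pods that may have already been toggled; a failed `mv` must not abort the command.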
• [SLOW TEST:92.209 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":261,"skipped":4464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:50:40.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5758.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5758.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5758.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5758.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.193.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.193.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.193.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.193.78_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5758.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5758.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5758.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5758.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5758.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5758.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 78.193.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.193.78_udp@PTR;check="$$(dig +tcp +noall +answer +search 78.193.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.193.78_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 18 00:50:47.081: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.088: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.091: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.113: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod 
dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:47.138: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:50:52.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.148: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.156: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod 
dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.178: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.181: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.183: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:52.204: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:50:57.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod 
dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.148: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.151: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.155: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.177: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.184: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.187: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not 
find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:50:57.206: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:51:02.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.150: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.152: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.181: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods 
dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.186: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.189: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:02.207: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:51:07.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.148: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods 
dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.156: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.178: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.181: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:07.215: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:51:12.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.150: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.153: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.171: INFO: Unable to read jessie_udp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.176: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.178: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local from pod dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9: the server could not find the requested resource (get pods dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9) Apr 18 00:51:12.194: INFO: Lookups using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 failed for: [wheezy_udp@dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@dns-test-service.dns-5758.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_udp@dns-test-service.dns-5758.svc.cluster.local jessie_tcp@dns-test-service.dns-5758.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5758.svc.cluster.local] Apr 18 00:51:17.203: INFO: DNS probes using dns-5758/dns-test-20c4986d-e369-42da-bd21-ae9f49847aa9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:51:17.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5758" for this suite. 
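Editor's note: the generated wheezy/jessie probe scripts above all follow one pattern: retry a lookup once per second, and write an OK marker file on the first non-empty answer, which the framework then reads back from the pod. A hedged, cluster-free sketch of that loop (using `getent hosts` in place of `dig +search`, `localhost` in place of the service name, and an arbitrary results directory, so it runs without a cluster):

```shell
#!/bin/sh
# Retry-until-answer loop, as used by the generated DNS probe scripts: each
# probe writes OK to a per-record marker file once the lookup returns a
# non-empty answer; the framework polls those markers to decide success.
results=$(mktemp -d)   # stand-in for the pod's /results volume

probe() {
    name=$1     # record to resolve
    marker=$2   # marker file to write on success
    for i in $(seq 1 5); do
        check=$(getent hosts "$name" 2>/dev/null) \
            && [ -n "$check" ] \
            && { echo OK > "$marker"; return 0; }
        sleep 1
    done
    return 1    # no answer within the retry budget
}

probe localhost "$results/udp@localhost"
cat "$results/udp@localhost"
```

The failures logged above are this polling in action: lookups for the service records return "could not find the requested resource" until kube-dns/CoreDNS has programmed the new headless service, after which the markers appear and the probes succeed at 00:51:17.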
• [SLOW TEST:36.909 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":262,"skipped":4493,"failed":0} [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:51:17.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 18 00:51:17.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1595' Apr 18 00:51:18.190: INFO: stderr: "" Apr 18 00:51:18.190: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 18 00:51:18.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1595' Apr 18 00:51:18.298: INFO: stderr: "" Apr 18 00:51:18.298: INFO: stdout: "update-demo-nautilus-5h655 update-demo-nautilus-mf49p " Apr 18 00:51:18.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5h655 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1595' Apr 18 00:51:18.395: INFO: stderr: "" Apr 18 00:51:18.395: INFO: stdout: "" Apr 18 00:51:18.395: INFO: update-demo-nautilus-5h655 is created but not running Apr 18 00:51:23.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1595' Apr 18 00:51:23.502: INFO: stderr: "" Apr 18 00:51:23.502: INFO: stdout: "update-demo-nautilus-5h655 update-demo-nautilus-mf49p " Apr 18 00:51:23.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5h655 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1595' Apr 18 00:51:23.609: INFO: stderr: "" Apr 18 00:51:23.609: INFO: stdout: "true" Apr 18 00:51:23.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5h655 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1595' Apr 18 00:51:23.707: INFO: stderr: "" Apr 18 00:51:23.707: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 18 00:51:23.707: INFO: validating pod update-demo-nautilus-5h655 Apr 18 00:51:23.711: INFO: got data: { "image": "nautilus.jpg" } Apr 18 00:51:23.711: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 18 00:51:23.711: INFO: update-demo-nautilus-5h655 is verified up and running Apr 18 00:51:23.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mf49p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1595' Apr 18 00:51:23.803: INFO: stderr: "" Apr 18 00:51:23.803: INFO: stdout: "true" Apr 18 00:51:23.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mf49p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1595' Apr 18 00:51:23.892: INFO: stderr: "" Apr 18 00:51:23.892: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 18 00:51:23.892: INFO: validating pod update-demo-nautilus-mf49p Apr 18 00:51:23.896: INFO: got data: { "image": "nautilus.jpg" } Apr 18 00:51:23.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 18 00:51:23.896: INFO: update-demo-nautilus-mf49p is verified up and running STEP: using delete to clean up resources Apr 18 00:51:23.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1595' Apr 18 00:51:23.997: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 18 00:51:23.997: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 18 00:51:23.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1595' Apr 18 00:51:24.095: INFO: stderr: "No resources found in kubectl-1595 namespace.\n" Apr 18 00:51:24.095: INFO: stdout: "" Apr 18 00:51:24.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1595 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 18 00:51:24.193: INFO: stderr: "" Apr 18 00:51:24.193: INFO: stdout: "update-demo-nautilus-5h655\nupdate-demo-nautilus-mf49p\n" Apr 18 00:51:24.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1595' Apr 18 00:51:24.791: INFO: stderr: "No resources found in kubectl-1595 namespace.\n" Apr 18 00:51:24.791: INFO: stdout: "" Apr 18 00:51:24.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1595 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
Apr 18 00:51:24.877: INFO: stderr: "" Apr 18 00:51:24.877: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:51:24.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1595" for this suite. • [SLOW TEST:7.042 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":263,"skipped":4493,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:51:24.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:51:25.099: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1970' Apr 18 00:51:25.333: INFO: stderr: "" Apr 18 00:51:25.333: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 18 00:51:25.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1970' Apr 18 00:51:25.589: INFO: stderr: "" Apr 18 00:51:25.589: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 18 00:51:26.594: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:51:26.594: INFO: Found 0 / 1 Apr 18 00:51:27.594: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:51:27.594: INFO: Found 0 / 1 Apr 18 00:51:28.594: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:51:28.594: INFO: Found 1 / 1 Apr 18 00:51:28.594: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 18 00:51:28.598: INFO: Selector matched 1 pods for map[app:agnhost] Apr 18 00:51:28.598: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 18 00:51:28.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-c7tvw --namespace=kubectl-1970' Apr 18 00:51:28.714: INFO: stderr: "" Apr 18 00:51:28.714: INFO: stdout: "Name: agnhost-master-c7tvw\nNamespace: kubectl-1970\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 18 Apr 2020 00:51:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.96\nIPs:\n IP: 10.244.2.96\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d26e804d7e0265db9ca66e429c28cef376d865d022f68f8258d648ea211e8cc7\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 18 Apr 2020 00:51:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-c99gg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-c99gg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-c99gg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-1970/agnhost-master-c7tvw to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container 
agnhost-master\n" Apr 18 00:51:28.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1970' Apr 18 00:51:28.838: INFO: stderr: "" Apr 18 00:51:28.838: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1970\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-c7tvw\n" Apr 18 00:51:28.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1970' Apr 18 00:51:28.940: INFO: stderr: "" Apr 18 00:51:28.940: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1970\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.49.126\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.96:6379\nSession Affinity: None\nEvents: \n" Apr 18 00:51:28.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 18 00:51:29.093: INFO: stderr: "" Apr 18 00:51:29.093: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 18 Apr 2020 00:51:21 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 18 Apr 2020 00:49:07 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 18 Apr 2020 00:49:07 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 18 Apr 2020 00:49:07 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 18 Apr 2020 00:49:07 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 33d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 33d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 33d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 18 00:51:29.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-1970' Apr 18 00:51:29.190: INFO: stderr: "" Apr 18 00:51:29.190: INFO: stdout: "Name: kubectl-1970\nLabels: e2e-framework=kubectl\n e2e-run=332747fc-6e99-44e5-8f74-4a45449f9ce7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:51:29.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1970" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":264,"skipped":4500,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:51:29.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-9437 STEP: creating replication controller nodeport-test in namespace services-9437 I0418 00:51:29.346976 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9437, replica count: 2 I0418 00:51:32.397769 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0418 00:51:35.398051 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 18 00:51:35.398: INFO: Creating new exec pod Apr 18 00:51:40.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9437 execpodbcgfc -- /bin/sh -x -c nc 
-zv -t -w 2 nodeport-test 80' Apr 18 00:51:40.648: INFO: stderr: "I0418 00:51:40.552706 3200 log.go:172] (0xc0000e9ad0) (0xc000910000) Create stream\nI0418 00:51:40.552774 3200 log.go:172] (0xc0000e9ad0) (0xc000910000) Stream added, broadcasting: 1\nI0418 00:51:40.555725 3200 log.go:172] (0xc0000e9ad0) Reply frame received for 1\nI0418 00:51:40.555775 3200 log.go:172] (0xc0000e9ad0) (0xc000701360) Create stream\nI0418 00:51:40.555789 3200 log.go:172] (0xc0000e9ad0) (0xc000701360) Stream added, broadcasting: 3\nI0418 00:51:40.557057 3200 log.go:172] (0xc0000e9ad0) Reply frame received for 3\nI0418 00:51:40.557105 3200 log.go:172] (0xc0000e9ad0) (0xc0001c8000) Create stream\nI0418 00:51:40.557221 3200 log.go:172] (0xc0000e9ad0) (0xc0001c8000) Stream added, broadcasting: 5\nI0418 00:51:40.558213 3200 log.go:172] (0xc0000e9ad0) Reply frame received for 5\nI0418 00:51:40.639021 3200 log.go:172] (0xc0000e9ad0) Data frame received for 5\nI0418 00:51:40.639106 3200 log.go:172] (0xc0001c8000) (5) Data frame handling\nI0418 00:51:40.639132 3200 log.go:172] (0xc0001c8000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0418 00:51:40.639711 3200 log.go:172] (0xc0000e9ad0) Data frame received for 5\nI0418 00:51:40.639729 3200 log.go:172] (0xc0001c8000) (5) Data frame handling\nI0418 00:51:40.639742 3200 log.go:172] (0xc0001c8000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0418 00:51:40.640001 3200 log.go:172] (0xc0000e9ad0) Data frame received for 3\nI0418 00:51:40.640020 3200 log.go:172] (0xc000701360) (3) Data frame handling\nI0418 00:51:40.640391 3200 log.go:172] (0xc0000e9ad0) Data frame received for 5\nI0418 00:51:40.640434 3200 log.go:172] (0xc0001c8000) (5) Data frame handling\nI0418 00:51:40.642430 3200 log.go:172] (0xc0000e9ad0) Data frame received for 1\nI0418 00:51:40.642448 3200 log.go:172] (0xc000910000) (1) Data frame handling\nI0418 00:51:40.642468 3200 log.go:172] (0xc000910000) (1) Data frame sent\nI0418 
00:51:40.642483 3200 log.go:172] (0xc0000e9ad0) (0xc000910000) Stream removed, broadcasting: 1\nI0418 00:51:40.642563 3200 log.go:172] (0xc0000e9ad0) Go away received\nI0418 00:51:40.642839 3200 log.go:172] (0xc0000e9ad0) (0xc000910000) Stream removed, broadcasting: 1\nI0418 00:51:40.642856 3200 log.go:172] (0xc0000e9ad0) (0xc000701360) Stream removed, broadcasting: 3\nI0418 00:51:40.642866 3200 log.go:172] (0xc0000e9ad0) (0xc0001c8000) Stream removed, broadcasting: 5\n" Apr 18 00:51:40.648: INFO: stdout: "" Apr 18 00:51:40.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9437 execpodbcgfc -- /bin/sh -x -c nc -zv -t -w 2 10.96.200.62 80' Apr 18 00:51:40.859: INFO: stderr: "I0418 00:51:40.777926 3223 log.go:172] (0xc000984d10) (0xc000a783c0) Create stream\nI0418 00:51:40.777993 3223 log.go:172] (0xc000984d10) (0xc000a783c0) Stream added, broadcasting: 1\nI0418 00:51:40.782625 3223 log.go:172] (0xc000984d10) Reply frame received for 1\nI0418 00:51:40.782674 3223 log.go:172] (0xc000984d10) (0xc000310aa0) Create stream\nI0418 00:51:40.782688 3223 log.go:172] (0xc000984d10) (0xc000310aa0) Stream added, broadcasting: 3\nI0418 00:51:40.783728 3223 log.go:172] (0xc000984d10) Reply frame received for 3\nI0418 00:51:40.783767 3223 log.go:172] (0xc000984d10) (0xc000a78000) Create stream\nI0418 00:51:40.783778 3223 log.go:172] (0xc000984d10) (0xc000a78000) Stream added, broadcasting: 5\nI0418 00:51:40.784800 3223 log.go:172] (0xc000984d10) Reply frame received for 5\nI0418 00:51:40.852383 3223 log.go:172] (0xc000984d10) Data frame received for 3\nI0418 00:51:40.852422 3223 log.go:172] (0xc000310aa0) (3) Data frame handling\nI0418 00:51:40.852450 3223 log.go:172] (0xc000984d10) Data frame received for 5\nI0418 00:51:40.852461 3223 log.go:172] (0xc000a78000) (5) Data frame handling\nI0418 00:51:40.852473 3223 log.go:172] (0xc000a78000) (5) Data frame sent\nI0418 00:51:40.852486 3223 
log.go:172] (0xc000984d10) Data frame received for 5\nI0418 00:51:40.852504 3223 log.go:172] (0xc000a78000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.200.62 80\nConnection to 10.96.200.62 80 port [tcp/http] succeeded!\nI0418 00:51:40.854242 3223 log.go:172] (0xc000984d10) Data frame received for 1\nI0418 00:51:40.854294 3223 log.go:172] (0xc000a783c0) (1) Data frame handling\nI0418 00:51:40.854313 3223 log.go:172] (0xc000a783c0) (1) Data frame sent\nI0418 00:51:40.854328 3223 log.go:172] (0xc000984d10) (0xc000a783c0) Stream removed, broadcasting: 1\nI0418 00:51:40.854394 3223 log.go:172] (0xc000984d10) Go away received\nI0418 00:51:40.854740 3223 log.go:172] (0xc000984d10) (0xc000a783c0) Stream removed, broadcasting: 1\nI0418 00:51:40.854762 3223 log.go:172] (0xc000984d10) (0xc000310aa0) Stream removed, broadcasting: 3\nI0418 00:51:40.854774 3223 log.go:172] (0xc000984d10) (0xc000a78000) Stream removed, broadcasting: 5\n" Apr 18 00:51:40.860: INFO: stdout: "" Apr 18 00:51:40.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9437 execpodbcgfc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30144' Apr 18 00:51:41.104: INFO: stderr: "I0418 00:51:40.998662 3243 log.go:172] (0xc000b94b00) (0xc000b4d9a0) Create stream\nI0418 00:51:40.998726 3243 log.go:172] (0xc000b94b00) (0xc000b4d9a0) Stream added, broadcasting: 1\nI0418 00:51:41.001081 3243 log.go:172] (0xc000b94b00) Reply frame received for 1\nI0418 00:51:41.001278 3243 log.go:172] (0xc000b94b00) (0xc000b4da40) Create stream\nI0418 00:51:41.001302 3243 log.go:172] (0xc000b94b00) (0xc000b4da40) Stream added, broadcasting: 3\nI0418 00:51:41.002674 3243 log.go:172] (0xc000b94b00) Reply frame received for 3\nI0418 00:51:41.002717 3243 log.go:172] (0xc000b94b00) (0xc000b480a0) Create stream\nI0418 00:51:41.002730 3243 log.go:172] (0xc000b94b00) (0xc000b480a0) Stream added, broadcasting: 5\nI0418 00:51:41.003889 3243 log.go:172] 
(0xc000b94b00) Reply frame received for 5\nI0418 00:51:41.097288 3243 log.go:172] (0xc000b94b00) Data frame received for 5\nI0418 00:51:41.097340 3243 log.go:172] (0xc000b480a0) (5) Data frame handling\nI0418 00:51:41.097361 3243 log.go:172] (0xc000b480a0) (5) Data frame sent\nI0418 00:51:41.097376 3243 log.go:172] (0xc000b94b00) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 30144\nConnection to 172.17.0.13 30144 port [tcp/30144] succeeded!\nI0418 00:51:41.097389 3243 log.go:172] (0xc000b480a0) (5) Data frame handling\nI0418 00:51:41.097434 3243 log.go:172] (0xc000b94b00) Data frame received for 3\nI0418 00:51:41.097459 3243 log.go:172] (0xc000b4da40) (3) Data frame handling\nI0418 00:51:41.099267 3243 log.go:172] (0xc000b94b00) Data frame received for 1\nI0418 00:51:41.099299 3243 log.go:172] (0xc000b4d9a0) (1) Data frame handling\nI0418 00:51:41.099330 3243 log.go:172] (0xc000b4d9a0) (1) Data frame sent\nI0418 00:51:41.099350 3243 log.go:172] (0xc000b94b00) (0xc000b4d9a0) Stream removed, broadcasting: 1\nI0418 00:51:41.099497 3243 log.go:172] (0xc000b94b00) Go away received\nI0418 00:51:41.099818 3243 log.go:172] (0xc000b94b00) (0xc000b4d9a0) Stream removed, broadcasting: 1\nI0418 00:51:41.099850 3243 log.go:172] (0xc000b94b00) (0xc000b4da40) Stream removed, broadcasting: 3\nI0418 00:51:41.099879 3243 log.go:172] (0xc000b94b00) (0xc000b480a0) Stream removed, broadcasting: 5\n" Apr 18 00:51:41.105: INFO: stdout: "" Apr 18 00:51:41.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9437 execpodbcgfc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30144' Apr 18 00:51:41.300: INFO: stderr: "I0418 00:51:41.234939 3262 log.go:172] (0xc0003c8000) (0xc000406b40) Create stream\nI0418 00:51:41.235022 3262 log.go:172] (0xc0003c8000) (0xc000406b40) Stream added, broadcasting: 1\nI0418 00:51:41.237528 3262 log.go:172] (0xc0003c8000) Reply frame received for 1\nI0418 00:51:41.237559 
3262 log.go:172] (0xc0003c8000) (0xc000942000) Create stream\nI0418 00:51:41.237568 3262 log.go:172] (0xc0003c8000) (0xc000942000) Stream added, broadcasting: 3\nI0418 00:51:41.238633 3262 log.go:172] (0xc0003c8000) Reply frame received for 3\nI0418 00:51:41.238676 3262 log.go:172] (0xc0003c8000) (0xc0009420a0) Create stream\nI0418 00:51:41.238698 3262 log.go:172] (0xc0003c8000) (0xc0009420a0) Stream added, broadcasting: 5\nI0418 00:51:41.239714 3262 log.go:172] (0xc0003c8000) Reply frame received for 5\nI0418 00:51:41.294000 3262 log.go:172] (0xc0003c8000) Data frame received for 3\nI0418 00:51:41.294055 3262 log.go:172] (0xc000942000) (3) Data frame handling\nI0418 00:51:41.294264 3262 log.go:172] (0xc0003c8000) Data frame received for 5\nI0418 00:51:41.294289 3262 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0418 00:51:41.294307 3262 log.go:172] (0xc0009420a0) (5) Data frame sent\nI0418 00:51:41.294318 3262 log.go:172] (0xc0003c8000) Data frame received for 5\nI0418 00:51:41.294327 3262 log.go:172] (0xc0009420a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30144\nConnection to 172.17.0.12 30144 port [tcp/30144] succeeded!\nI0418 00:51:41.295682 3262 log.go:172] (0xc0003c8000) Data frame received for 1\nI0418 00:51:41.295703 3262 log.go:172] (0xc000406b40) (1) Data frame handling\nI0418 00:51:41.295727 3262 log.go:172] (0xc000406b40) (1) Data frame sent\nI0418 00:51:41.296034 3262 log.go:172] (0xc0003c8000) (0xc000406b40) Stream removed, broadcasting: 1\nI0418 00:51:41.296117 3262 log.go:172] (0xc0003c8000) Go away received\nI0418 00:51:41.296330 3262 log.go:172] (0xc0003c8000) (0xc000406b40) Stream removed, broadcasting: 1\nI0418 00:51:41.296342 3262 log.go:172] (0xc0003c8000) (0xc000942000) Stream removed, broadcasting: 3\nI0418 00:51:41.296349 3262 log.go:172] (0xc0003c8000) (0xc0009420a0) Stream removed, broadcasting: 5\n" Apr 18 00:51:41.300: INFO: stdout: "" [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:51:41.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9437" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.112 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":265,"skipped":4518,"failed":0} [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:51:41.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-e1f18e05-0aa1-47ce-88fc-a01513aa8fdc STEP: Creating secret with name s-test-opt-upd-08060f77-1f80-4ef2-b9a8-5a458549c059 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e1f18e05-0aa1-47ce-88fc-a01513aa8fdc STEP: Updating secret s-test-opt-upd-08060f77-1f80-4ef2-b9a8-5a458549c059 STEP: Creating secret with name 
s-test-opt-create-c34b0586-5b71-4a41-8fcb-062160f4cf6f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:07.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1709" for this suite. • [SLOW TEST:86.577 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4518,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:07.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-8f85d6cd-6098-446f-ad13-6b1acf9c2035 STEP: Creating a pod to test consume secrets Apr 18 00:53:07.970: INFO: Waiting up to 5m0s for pod "pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312" in namespace "secrets-825" to be 
"Succeeded or Failed" Apr 18 00:53:07.985: INFO: Pod "pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312": Phase="Pending", Reason="", readiness=false. Elapsed: 14.885316ms Apr 18 00:53:09.999: INFO: Pod "pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028403053s Apr 18 00:53:12.004: INFO: Pod "pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034043245s STEP: Saw pod success Apr 18 00:53:12.004: INFO: Pod "pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312" satisfied condition "Succeeded or Failed" Apr 18 00:53:12.008: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312 container secret-volume-test: STEP: delete the pod Apr 18 00:53:12.510: INFO: Waiting for pod pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312 to disappear Apr 18 00:53:13.035: INFO: Pod pod-secrets-258611c6-de10-4ffe-b25f-d247cfef6312 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:13.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-825" for this suite. 
• [SLOW TEST:5.171 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:13.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-12e7682a-3cfd-48bc-980a-5ee41822ec6a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:13.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6095" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":268,"skipped":4570,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:13.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-9a003e09-1be3-4528-913e-09fcf2ce354d STEP: Creating a pod to test consume secrets Apr 18 00:53:13.751: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3" in namespace "projected-2901" to be "Succeeded or Failed" Apr 18 00:53:13.921: INFO: Pod "pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3": Phase="Pending", Reason="", readiness=false. Elapsed: 170.342976ms Apr 18 00:53:15.956: INFO: Pod "pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205298095s Apr 18 00:53:17.960: INFO: Pod "pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.209165988s STEP: Saw pod success Apr 18 00:53:17.960: INFO: Pod "pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3" satisfied condition "Succeeded or Failed" Apr 18 00:53:17.964: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3 container projected-secret-volume-test: STEP: delete the pod Apr 18 00:53:18.024: INFO: Waiting for pod pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3 to disappear Apr 18 00:53:18.032: INFO: Pod pod-projected-secrets-4c2432f7-bf96-404f-8bb8-845b5aa5ebd3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:18.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2901" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4576,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:18.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 18 00:53:24.176: 
INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4334 PodName:pod-sharedvolume-960e69d0-3c94-43d4-abb0-5820c1d5afff ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 18 00:53:24.176: INFO: >>> kubeConfig: /root/.kube/config I0418 00:53:24.211154 7 log.go:172] (0xc002f0c580) (0xc0014155e0) Create stream I0418 00:53:24.211192 7 log.go:172] (0xc002f0c580) (0xc0014155e0) Stream added, broadcasting: 1 I0418 00:53:24.214043 7 log.go:172] (0xc002f0c580) Reply frame received for 1 I0418 00:53:24.214095 7 log.go:172] (0xc002f0c580) (0xc001874960) Create stream I0418 00:53:24.214118 7 log.go:172] (0xc002f0c580) (0xc001874960) Stream added, broadcasting: 3 I0418 00:53:24.215390 7 log.go:172] (0xc002f0c580) Reply frame received for 3 I0418 00:53:24.215454 7 log.go:172] (0xc002f0c580) (0xc002c28640) Create stream I0418 00:53:24.215479 7 log.go:172] (0xc002f0c580) (0xc002c28640) Stream added, broadcasting: 5 I0418 00:53:24.216911 7 log.go:172] (0xc002f0c580) Reply frame received for 5 I0418 00:53:24.296713 7 log.go:172] (0xc002f0c580) Data frame received for 5 I0418 00:53:24.296736 7 log.go:172] (0xc002c28640) (5) Data frame handling I0418 00:53:24.296788 7 log.go:172] (0xc002f0c580) Data frame received for 3 I0418 00:53:24.296904 7 log.go:172] (0xc001874960) (3) Data frame handling I0418 00:53:24.296952 7 log.go:172] (0xc001874960) (3) Data frame sent I0418 00:53:24.296971 7 log.go:172] (0xc002f0c580) Data frame received for 3 I0418 00:53:24.297000 7 log.go:172] (0xc001874960) (3) Data frame handling I0418 00:53:24.299226 7 log.go:172] (0xc002f0c580) Data frame received for 1 I0418 00:53:24.299264 7 log.go:172] (0xc0014155e0) (1) Data frame handling I0418 00:53:24.299293 7 log.go:172] (0xc0014155e0) (1) Data frame sent I0418 00:53:24.299313 7 log.go:172] (0xc002f0c580) (0xc0014155e0) Stream removed, broadcasting: 1 I0418 00:53:24.299344 7 log.go:172] 
(0xc002f0c580) Go away received I0418 00:53:24.299415 7 log.go:172] (0xc002f0c580) (0xc0014155e0) Stream removed, broadcasting: 1 I0418 00:53:24.299434 7 log.go:172] (0xc002f0c580) (0xc001874960) Stream removed, broadcasting: 3 I0418 00:53:24.299449 7 log.go:172] (0xc002f0c580) (0xc002c28640) Stream removed, broadcasting: 5 Apr 18 00:53:24.299: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4334" for this suite. • [SLOW TEST:6.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":270,"skipped":4592,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:24.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:40.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1338" for this suite. • [SLOW TEST:16.139 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":271,"skipped":4611,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:40.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:56.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3279" for this suite. • [SLOW TEST:16.258 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":272,"skipped":4618,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:56.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 18 00:53:56.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 18 00:53:56.958: INFO: stderr: "" Apr 18 00:53:56.958: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:53:56.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3850" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":273,"skipped":4620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:53:56.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 18 00:53:57.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 18 00:53:59.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722768037, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722768037, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722768037, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722768037, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 18 00:54:02.688: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:54:02.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5752" for this suite. STEP: Destroying namespace "webhook-5752-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.816 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":274,"skipped":4684,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 18 00:54:02.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-a5f8ceac-8849-4649-96d7-109a09ff4290 STEP: Creating a pod to test consume secrets Apr 18 00:54:03.286: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303" in namespace "projected-4715" to be "Succeeded or Failed" Apr 18 00:54:03.682: INFO: Pod 
"pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303": Phase="Pending", Reason="", readiness=false. Elapsed: 396.12029ms Apr 18 00:54:05.686: INFO: Pod "pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400196485s Apr 18 00:54:07.690: INFO: Pod "pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.403896883s STEP: Saw pod success Apr 18 00:54:07.690: INFO: Pod "pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303" satisfied condition "Succeeded or Failed" Apr 18 00:54:07.693: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303 container projected-secret-volume-test: STEP: delete the pod Apr 18 00:54:07.846: INFO: Waiting for pod pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303 to disappear Apr 18 00:54:07.892: INFO: Pod pod-projected-secrets-71e862e9-851b-4da2-9437-1ed6645d7303 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 18 00:54:07.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4715" for this suite. 
• [SLOW TEST:5.128 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4706,"failed":0} SSSSSSSSSSSApr 18 00:54:07.914: INFO: Running AfterSuite actions on all nodes Apr 18 00:54:07.914: INFO: Running AfterSuite actions on node 1 Apr 18 00:54:07.914: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4642.409 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS
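The Ginkgo runner interleaves machine-readable JSON progress records (the `{"msg": ..., "total": ..., "completed": ...}` lines) with plain log text, ending with the `"Test Suite completed"` record above. A minimal sketch of pulling those records out of a saved log and tallying the outcome; the `extract_progress` helper name is my own, not part of the test framework:

```python
import json
import re


def extract_progress(log_text):
    """Yield the JSON progress records embedded in a Ginkgo e2e log.

    Records look like:
      {"msg":"PASSED ...","total":275,"completed":N,"skipped":K,"failed":F}
    and always end with the "failed" field, so a non-greedy match up to
    that field isolates each record from the surrounding log noise.
    """
    for match in re.finditer(r'\{"msg":.*?"failed":\d+\}', log_text):
        yield json.loads(match.group(0))


if __name__ == "__main__":
    sample = (
        'STEP: Destroying namespace "kubectl-3850" for this suite. '
        '{"msg":"PASSED [sig-cli] Kubectl client Kubectl version",'
        '"total":275,"completed":273,"skipped":4620,"failed":0} SSSSSS'
    )
    records = list(extract_progress(sample))
    print(records[0]["completed"])  # -> 273
```

This relies on each record being a single-line JSON object with `"failed"` as its last key, which holds for the records in this log; a log format change would require adjusting the pattern.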