I0410 12:55:39.709370 6 e2e.go:243] Starting e2e run "2a63e938-9bd4-4a0c-926d-2e1d446ffcd6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586523338 - Will randomize all specs
Will run 215 of 4412 specs

Apr 10 12:55:39.900: INFO: >>> kubeConfig: /root/.kube/config
Apr 10 12:55:39.904: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 10 12:55:39.927: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 10 12:55:39.958: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 10 12:55:39.958: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 10 12:55:39.958: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 10 12:55:39.969: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 10 12:55:39.969: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 10 12:55:39.969: INFO: e2e test version: v1.15.11
Apr 10 12:55:39.970: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:55:39.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Apr 10 12:55:40.020: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-xzfv
STEP: Creating a pod to test atomic-volume-subpath
Apr 10 12:55:40.049: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xzfv" in namespace "subpath-3991" to be "success or failure"
Apr 10 12:55:40.059: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.314528ms
Apr 10 12:55:42.063: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013552127s
Apr 10 12:55:44.066: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 4.017031392s
Apr 10 12:55:46.070: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 6.02068091s
Apr 10 12:55:48.074: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 8.024532581s
Apr 10 12:55:50.078: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 10.028925546s
Apr 10 12:55:52.082: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 12.032858075s
Apr 10 12:55:54.085: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 14.036085282s
Apr 10 12:55:56.089: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 16.039652955s
Apr 10 12:55:58.093: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 18.043660105s
Apr 10 12:56:00.096: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 20.047124401s
Apr 10 12:56:02.101: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Running", Reason="", readiness=true. Elapsed: 22.051259711s
Apr 10 12:56:04.108: INFO: Pod "pod-subpath-test-downwardapi-xzfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.059032342s
STEP: Saw pod success
Apr 10 12:56:04.108: INFO: Pod "pod-subpath-test-downwardapi-xzfv" satisfied condition "success or failure"
Apr 10 12:56:04.111: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-xzfv container test-container-subpath-downwardapi-xzfv:
STEP: delete the pod
Apr 10 12:56:04.139: INFO: Waiting for pod pod-subpath-test-downwardapi-xzfv to disappear
Apr 10 12:56:04.172: INFO: Pod pod-subpath-test-downwardapi-xzfv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xzfv
Apr 10 12:56:04.172: INFO: Deleting pod "pod-subpath-test-downwardapi-xzfv" in namespace "subpath-3991"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:56:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3991" for this suite.
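The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" entries above come from a poll loop: the framework re-reads the pod's phase roughly every two seconds, logging the elapsed time, until the pod reaches a terminal phase or the timeout expires. A minimal sketch of that pattern (the function name and signature are illustrative, not the framework's actual Go API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase is seen, mimicking the
    e2e framework's 'Waiting up to 5m0s for pod ...' loop (illustrative)."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Each iteration corresponds to one 'Phase=..., Elapsed: ...' log line.
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):  # the "success or failure" condition
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated pod that moves Pending -> Running -> Succeeded, as in the log above:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), timeout=10, interval=0.01)
```

The real framework returns an error instead of raising, but the shape of the loop is the same.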
Apr 10 12:56:10.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:56:10.315: INFO: namespace subpath-3991 deletion completed in 6.101808649s
• [SLOW TEST:30.344 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:56:10.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 10 12:56:10.400: INFO: Waiting up to 5m0s for pod "pod-0008290b-e5e7-496b-8d09-870b8c1bedac" in namespace "emptydir-2425" to be "success or failure"
Apr 10 12:56:10.407: INFO: Pod "pod-0008290b-e5e7-496b-8d09-870b8c1bedac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.136628ms
Apr 10 12:56:12.458: INFO: Pod "pod-0008290b-e5e7-496b-8d09-870b8c1bedac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058112798s
Apr 10 12:56:14.462: INFO: Pod "pod-0008290b-e5e7-496b-8d09-870b8c1bedac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062507341s
STEP: Saw pod success
Apr 10 12:56:14.462: INFO: Pod "pod-0008290b-e5e7-496b-8d09-870b8c1bedac" satisfied condition "success or failure"
Apr 10 12:56:14.465: INFO: Trying to get logs from node iruya-worker pod pod-0008290b-e5e7-496b-8d09-870b8c1bedac container test-container:
STEP: delete the pod
Apr 10 12:56:14.548: INFO: Waiting for pod pod-0008290b-e5e7-496b-8d09-870b8c1bedac to disappear
Apr 10 12:56:14.551: INFO: Pod pod-0008290b-e5e7-496b-8d09-870b8c1bedac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:56:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2425" for this suite.
Apr 10 12:56:20.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:56:20.665: INFO: namespace emptydir-2425 deletion completed in 6.110967408s
• [SLOW TEST:10.350 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:56:20.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 10 12:56:23.766: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:56:23.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5542" for this suite.
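The test above exercises the FallbackToLogsOnError termination-message policy: the container fails with an empty termination-message file, and the kubelet is expected to fall back to the tail of the container's log (here, "DONE"). A small sketch of that selection rule, with an illustrative function that is not the kubelet's actual code:

```python
def termination_message(policy, message_file_contents, log_tail, exit_code):
    """Illustrative sketch of termination-message selection:
    - a non-empty termination-message file always wins;
    - with FallbackToLogsOnError, a failed container with an empty
      message file gets the tail of its log instead."""
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""

# The scenario from the log: empty message file, failed container, log ends in "DONE".
msg = termination_message("FallbackToLogsOnError", "", "DONE", exit_code=1)
```

With the default policy ("File"), the same failed container would report an empty termination message.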
Apr 10 12:56:29.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:56:29.936: INFO: namespace container-runtime-5542 deletion completed in 6.085803224s
• [SLOW TEST:9.271 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:56:29.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:56:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9087" for this suite.
Apr 10 12:56:40.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:56:40.182: INFO: namespace emptydir-wrapper-9087 deletion completed in 6.094248963s
• [SLOW TEST:10.244 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:56:40.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:57:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9701" for this suite.
Apr 10 12:57:20.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:57:20.119: INFO: namespace container-runtime-9701 deletion completed in 6.089119865s
• [SLOW TEST:39.937 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:57:20.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a9da511b-66ce-482e-927c-27f11185576b
STEP: Creating a pod to test consume configMaps
Apr 10 12:57:20.183: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750" in namespace "projected-3911" to be "success or failure"
Apr 10 12:57:20.237: INFO: Pod "pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750": Phase="Pending", Reason="", readiness=false. Elapsed: 54.446157ms
Apr 10 12:57:22.241: INFO: Pod "pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058561456s
Apr 10 12:57:24.245: INFO: Pod "pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062345841s
STEP: Saw pod success
Apr 10 12:57:24.245: INFO: Pod "pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750" satisfied condition "success or failure"
Apr 10 12:57:24.248: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750 container projected-configmap-volume-test:
STEP: delete the pod
Apr 10 12:57:24.279: INFO: Waiting for pod pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750 to disappear
Apr 10 12:57:24.282: INFO: Pod pod-projected-configmaps-75e46618-3ac3-493e-81dd-e0bc96e48750 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:57:24.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3911" for this suite.
Apr 10 12:57:30.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:57:30.391: INFO: namespace projected-3911 deletion completed in 6.106393484s
• [SLOW TEST:10.272 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:57:30.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8c4a3963-3ef3-4f1a-943d-2002c001e5a6
STEP: Creating a pod to test consume secrets
Apr 10 12:57:30.632: INFO: Waiting up to 5m0s for pod "pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579" in namespace "secrets-2888" to be "success or failure"
Apr 10 12:57:30.648: INFO: Pod "pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579": Phase="Pending", Reason="", readiness=false. Elapsed: 16.204306ms
Apr 10 12:57:32.668: INFO: Pod "pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036776011s
Apr 10 12:57:34.673: INFO: Pod "pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041129112s
STEP: Saw pod success
Apr 10 12:57:34.673: INFO: Pod "pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579" satisfied condition "success or failure"
Apr 10 12:57:34.675: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579 container secret-volume-test:
STEP: delete the pod
Apr 10 12:57:34.716: INFO: Waiting for pod pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579 to disappear
Apr 10 12:57:34.731: INFO: Pod pod-secrets-3574c18b-df29-41fa-9021-60c4a5631579 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:57:34.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2888" for this suite.
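The Secrets test above mounts a secret whose name also exists in a second namespace (the suite later tears down both "secrets-2888" and "secret-namespace-312"): it passes because Kubernetes keys namespaced objects by the pair (namespace, name), so identically named secrets in different namespaces never collide. A toy sketch of that keying (the store and helper are illustrative, not the API server's storage):

```python
# Illustrative in-memory store keyed the way namespaced objects are:
# by (namespace, name), never by name alone.
store = {}

def put_secret(namespace, name, data):
    store[(namespace, name)] = data

def get_secret(namespace, name):
    return store[(namespace, name)]

# Two secrets with the same name in different namespaces coexist:
put_secret("secrets-2888", "secret-test", b"value-a")
put_secret("secret-namespace-312", "secret-test", b"value-b")
```

A pod's volume source names only the secret; the namespace half of the key is taken from the pod's own namespace, which is why the other namespace's secret can never be mounted by mistake.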
Apr 10 12:57:40.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:57:40.895: INFO: namespace secrets-2888 deletion completed in 6.16154238s
STEP: Destroying namespace "secret-namespace-312" for this suite.
Apr 10 12:57:46.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:57:46.978: INFO: namespace secret-namespace-312 deletion completed in 6.082966898s
• [SLOW TEST:16.587 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:57:46.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 12:57:47.076: INFO: Create a RollingUpdate DaemonSet
Apr 10 12:57:47.080: INFO: Check that daemon pods launch on every node of the cluster
Apr 10 12:57:47.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:57:47.100: INFO: Number of nodes with available pods: 0
Apr 10 12:57:47.100: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 12:57:48.106: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:57:48.109: INFO: Number of nodes with available pods: 0
Apr 10 12:57:48.109: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 12:57:49.104: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:57:49.107: INFO: Number of nodes with available pods: 0
Apr 10 12:57:49.107: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 12:57:50.125: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:57:50.128: INFO: Number of nodes with available pods: 0
Apr 10 12:57:50.128: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 12:57:51.106: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:57:51.109: INFO: Number of nodes with available pods: 2
Apr 10 12:57:51.109: INFO: Number of running nodes: 2, number of available pods: 2
Apr 10 12:57:51.109: INFO: Update the DaemonSet to trigger a rollout
Apr 10 12:57:51.116: INFO: Updating DaemonSet daemon-set
Apr 10 12:58:02.136: INFO: Roll back the DaemonSet before rollout is complete
Apr 10 12:58:02.142: INFO: Updating DaemonSet daemon-set
Apr 10 12:58:02.142: INFO: Make sure DaemonSet rollback is complete
Apr 10 12:58:02.151: INFO: Wrong image for pod: daemon-set-9sf28. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 10 12:58:02.151: INFO: Pod daemon-set-9sf28 is not available
Apr 10 12:58:02.196: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:58:03.200: INFO: Wrong image for pod: daemon-set-9sf28. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 10 12:58:03.200: INFO: Pod daemon-set-9sf28 is not available
Apr 10 12:58:03.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:58:04.201: INFO: Wrong image for pod: daemon-set-9sf28. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 10 12:58:04.201: INFO: Pod daemon-set-9sf28 is not available
Apr 10 12:58:04.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 12:58:05.200: INFO: Pod daemon-set-r5sgb is not available
Apr 10 12:58:05.203: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7904, will wait for the garbage collector to delete the pods
Apr 10 12:58:05.268: INFO: Deleting DaemonSet.extensions daemon-set took: 6.410842ms
Apr 10 12:58:05.568: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.250373ms
Apr 10 12:58:12.177: INFO: Number of nodes with available pods: 0
Apr 10 12:58:12.177: INFO: Number of running nodes: 0, number of available pods: 0
Apr 10 12:58:12.183: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7904/daemonsets","resourceVersion":"4658849"},"items":null}
Apr 10 12:58:12.185: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7904/pods","resourceVersion":"4658849"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:58:12.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7904" for this suite.
Apr 10 12:58:18.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:58:18.311: INFO: namespace daemonsets-7904 deletion completed in 6.113083662s
• [SLOW TEST:31.333 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:58:18.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 10 12:58:18.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7490 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 10 12:58:24.010: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0410 12:58:23.932437 39 log.go:172] (0xc000119290) (0xc00065a6e0) Create stream\nI0410 12:58:23.932510 39 log.go:172] (0xc000119290) (0xc00065a6e0) Stream added, broadcasting: 1\nI0410 12:58:23.938373 39 log.go:172] (0xc000119290) Reply frame received for 1\nI0410 12:58:23.938423 39 log.go:172] (0xc000119290) (0xc00055dae0) Create stream\nI0410 12:58:23.938434 39 log.go:172] (0xc000119290) (0xc00055dae0) Stream added, broadcasting: 3\nI0410 12:58:23.939173 39 log.go:172] (0xc000119290) Reply frame received for 3\nI0410 12:58:23.939209 39 log.go:172] (0xc000119290) (0xc00065a280) Create stream\nI0410 12:58:23.939225 39 log.go:172] (0xc000119290) (0xc00065a280) Stream added, broadcasting: 5\nI0410 12:58:23.939874 39 log.go:172] (0xc000119290) Reply frame received for 5\nI0410 12:58:23.939904 39 log.go:172] (0xc000119290) (0xc000184000) Create stream\nI0410 12:58:23.939922 39 log.go:172] (0xc000119290) (0xc000184000) Stream added, broadcasting: 7\nI0410 12:58:23.940500 39 log.go:172] (0xc000119290) Reply frame received for 7\nI0410 12:58:23.940588 39 log.go:172] (0xc00055dae0) (3) Writing data frame\nI0410 12:58:23.940672 39 log.go:172] (0xc00055dae0) (3) Writing data frame\nI0410 12:58:23.941427 39 log.go:172] (0xc000119290) Data frame received for 5\nI0410 12:58:23.941447 39 log.go:172] (0xc00065a280) (5) Data frame handling\nI0410 12:58:23.941461 39 log.go:172] (0xc00065a280) (5) Data frame sent\nI0410 12:58:23.942056 39 log.go:172] (0xc000119290) Data frame received for 5\nI0410 12:58:23.942073 39 log.go:172] (0xc00065a280) (5) Data frame handling\nI0410 12:58:23.942083 39 log.go:172] (0xc00065a280) (5) Data frame sent\nI0410 12:58:23.979514 39 log.go:172] (0xc000119290) Data frame received for 5\nI0410 12:58:23.979569 39 log.go:172] (0xc00065a280) (5) Data frame handling\nI0410 12:58:23.979606 39 log.go:172] (0xc000119290) Data frame received for 7\nI0410 12:58:23.979630 39 log.go:172] (0xc000184000) (7) Data frame handling\nI0410 12:58:23.980050 39 log.go:172] (0xc000119290) Data frame received for 1\nI0410 12:58:23.980082 39 log.go:172] (0xc00065a6e0) (1) Data frame handling\nI0410 12:58:23.980100 39 log.go:172] (0xc00065a6e0) (1) Data frame sent\nI0410 12:58:23.980123 39 log.go:172] (0xc000119290) (0xc00055dae0) Stream removed, broadcasting: 3\nI0410 12:58:23.980173 39 log.go:172] (0xc000119290) (0xc00065a6e0) Stream removed, broadcasting: 1\nI0410 12:58:23.980191 39 log.go:172] (0xc000119290) Go away received\nI0410 12:58:23.980434 39 log.go:172] (0xc000119290) (0xc00065a6e0) Stream removed, broadcasting: 1\nI0410 12:58:23.980468 39 log.go:172] (0xc000119290) (0xc00055dae0) Stream removed, broadcasting: 3\nI0410 12:58:23.980486 39 log.go:172] (0xc000119290) (0xc00065a280) Stream removed, broadcasting: 5\nI0410 12:58:23.980505 39 log.go:172] (0xc000119290) (0xc000184000) Stream removed, broadcasting: 7\n"
Apr 10 12:58:24.010: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:58:26.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7490" for this suite.
Apr 10 12:58:34.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:58:34.113: INFO: namespace kubectl-7490 deletion completed in 8.092963812s
• [SLOW TEST:15.802 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:58:34.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 10 12:58:37.215: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 12:58:37.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7323" for this suite.
Apr 10 12:58:43.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 12:58:43.346: INFO: namespace container-runtime-7323 deletion completed in 6.114652305s
• [SLOW TEST:9.233 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 12:58:43.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 10 12:58:43.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 10 12:58:43.515: INFO: stderr: "" Apr 10 12:58:43.515: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:58:43.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2817" for this suite. 
Apr 10 12:58:49.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 12:58:49.634: INFO: namespace kubectl-2817 deletion completed in 6.095237597s • [SLOW TEST:6.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 12:58:49.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 10 12:58:49.693: INFO: Waiting up to 5m0s for pod "downward-api-fc2febb7-68ba-479d-913c-772a6a056fee" in namespace "downward-api-5822" to be "success or failure" Apr 10 12:58:49.697: INFO: Pod "downward-api-fc2febb7-68ba-479d-913c-772a6a056fee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.922748ms Apr 10 12:58:51.702: INFO: Pod "downward-api-fc2febb7-68ba-479d-913c-772a6a056fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008410646s Apr 10 12:58:53.707: INFO: Pod "downward-api-fc2febb7-68ba-479d-913c-772a6a056fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013644789s STEP: Saw pod success Apr 10 12:58:53.707: INFO: Pod "downward-api-fc2febb7-68ba-479d-913c-772a6a056fee" satisfied condition "success or failure" Apr 10 12:58:53.710: INFO: Trying to get logs from node iruya-worker pod downward-api-fc2febb7-68ba-479d-913c-772a6a056fee container dapi-container: STEP: delete the pod Apr 10 12:58:53.728: INFO: Waiting for pod downward-api-fc2febb7-68ba-479d-913c-772a6a056fee to disappear Apr 10 12:58:53.732: INFO: Pod downward-api-fc2febb7-68ba-479d-913c-772a6a056fee no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:58:53.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5822" for this suite. 
Apr 10 12:58:59.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 12:58:59.825: INFO: namespace downward-api-5822 deletion completed in 6.089676251s • [SLOW TEST:10.191 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 12:58:59.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 10 12:58:59.905: INFO: Waiting up to 5m0s for pod "pod-9117e4b5-f243-4d18-bacd-4c629fa71686" in namespace "emptydir-5874" to be "success or failure" Apr 10 12:58:59.908: INFO: Pod "pod-9117e4b5-f243-4d18-bacd-4c629fa71686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.438976ms Apr 10 12:59:01.912: INFO: Pod "pod-9117e4b5-f243-4d18-bacd-4c629fa71686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006679931s Apr 10 12:59:03.916: INFO: Pod "pod-9117e4b5-f243-4d18-bacd-4c629fa71686": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01071902s STEP: Saw pod success Apr 10 12:59:03.916: INFO: Pod "pod-9117e4b5-f243-4d18-bacd-4c629fa71686" satisfied condition "success or failure" Apr 10 12:59:03.919: INFO: Trying to get logs from node iruya-worker2 pod pod-9117e4b5-f243-4d18-bacd-4c629fa71686 container test-container: STEP: delete the pod Apr 10 12:59:03.949: INFO: Waiting for pod pod-9117e4b5-f243-4d18-bacd-4c629fa71686 to disappear Apr 10 12:59:03.980: INFO: Pod pod-9117e4b5-f243-4d18-bacd-4c629fa71686 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:59:03.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5874" for this suite. Apr 10 12:59:10.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 12:59:10.084: INFO: namespace emptydir-5874 deletion completed in 6.100179363s • [SLOW TEST:10.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 12:59:10.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 10 12:59:10.150: INFO: Waiting up to 5m0s for pod "pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98" in namespace "emptydir-2273" to be "success or failure" Apr 10 12:59:10.154: INFO: Pod "pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.43537ms Apr 10 12:59:12.157: INFO: Pod "pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006957164s Apr 10 12:59:14.160: INFO: Pod "pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010106425s STEP: Saw pod success Apr 10 12:59:14.160: INFO: Pod "pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98" satisfied condition "success or failure" Apr 10 12:59:14.163: INFO: Trying to get logs from node iruya-worker pod pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98 container test-container: STEP: delete the pod Apr 10 12:59:14.183: INFO: Waiting for pod pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98 to disappear Apr 10 12:59:14.187: INFO: Pod pod-c8e21e9e-6883-493d-a9dc-e13c3c6edf98 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:59:14.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2273" for this suite. 
Apr 10 12:59:20.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 12:59:20.285: INFO: namespace emptydir-2273 deletion completed in 6.094784843s • [SLOW TEST:10.200 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 12:59:20.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 10 12:59:23.406: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:59:23.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5267" for this suite. Apr 10 12:59:29.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 12:59:29.519: INFO: namespace container-runtime-5267 deletion completed in 6.093392895s • [SLOW TEST:9.233 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 12:59:29.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller 
with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 12:59:34.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9678" for this suite. Apr 10 13:00:04.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:00:04.746: INFO: namespace replication-controller-9678 deletion completed in 30.085122588s • [SLOW TEST:35.227 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:00:04.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-34749cb5-7e17-4c19-bddd-66cf1491ce34 STEP: Creating secret with name s-test-opt-upd-9862ec4d-9593-4738-9258-97d21c4a402b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-34749cb5-7e17-4c19-bddd-66cf1491ce34 STEP: Updating secret 
s-test-opt-upd-9862ec4d-9593-4738-9258-97d21c4a402b STEP: Creating secret with name s-test-opt-create-57a81171-14ff-44a4-bf1b-0bac283d2d36 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:00:12.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1657" for this suite. Apr 10 13:00:34.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:00:35.048: INFO: namespace secrets-1657 deletion completed in 22.089453086s • [SLOW TEST:30.302 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:00:35.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 10 13:00:35.110: INFO: Waiting up to 5m0s for pod "downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba" in namespace "downward-api-6136" to be "success or 
failure" Apr 10 13:00:35.124: INFO: Pod "downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba": Phase="Pending", Reason="", readiness=false. Elapsed: 13.784997ms Apr 10 13:00:37.128: INFO: Pod "downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017761053s Apr 10 13:00:39.132: INFO: Pod "downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022049217s STEP: Saw pod success Apr 10 13:00:39.132: INFO: Pod "downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba" satisfied condition "success or failure" Apr 10 13:00:39.135: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba container dapi-container: STEP: delete the pod Apr 10 13:00:39.151: INFO: Waiting for pod downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba to disappear Apr 10 13:00:39.155: INFO: Pod downward-api-b365a84b-bd40-40ba-a7b0-8f1f34e98dba no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:00:39.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6136" for this suite. 
Apr 10 13:00:45.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:00:45.249: INFO: namespace downward-api-6136 deletion completed in 6.090912083s • [SLOW TEST:10.200 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:00:45.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 13:00:45.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4" in namespace "downward-api-1988" to be "success or failure" Apr 10 13:00:45.385: INFO: Pod "downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.712819ms Apr 10 13:00:47.388: INFO: Pod "downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043192248s Apr 10 13:00:49.393: INFO: Pod "downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047625174s STEP: Saw pod success Apr 10 13:00:49.393: INFO: Pod "downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4" satisfied condition "success or failure" Apr 10 13:00:49.396: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4 container client-container: STEP: delete the pod Apr 10 13:00:49.418: INFO: Waiting for pod downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4 to disappear Apr 10 13:00:49.437: INFO: Pod downwardapi-volume-69eeb7ec-e564-4e33-85fa-b71613cf25d4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:00:49.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1988" for this suite. 
Apr 10 13:00:55.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:00:55.536: INFO: namespace downward-api-1988 deletion completed in 6.096107151s • [SLOW TEST:10.286 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:00:55.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-97c675b9-c04d-4e1c-bbde-aa63d255d9a7 in namespace container-probe-5851 Apr 10 13:00:59.607: INFO: Started pod busybox-97c675b9-c04d-4e1c-bbde-aa63d255d9a7 in namespace container-probe-5851 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 13:00:59.610: INFO: Initial restart count of pod 
busybox-97c675b9-c04d-4e1c-bbde-aa63d255d9a7 is 0 Apr 10 13:01:49.724: INFO: Restart count of pod container-probe-5851/busybox-97c675b9-c04d-4e1c-bbde-aa63d255d9a7 is now 1 (50.113705622s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:01:49.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5851" for this suite. Apr 10 13:01:55.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:01:55.861: INFO: namespace container-probe-5851 deletion completed in 6.107172142s • [SLOW TEST:60.325 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:01:55.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 10 
13:01:55.959: INFO: Waiting up to 5m0s for pod "client-containers-321b0261-294c-4112-b3cd-5972224b48ed" in namespace "containers-4711" to be "success or failure" Apr 10 13:01:55.978: INFO: Pod "client-containers-321b0261-294c-4112-b3cd-5972224b48ed": Phase="Pending", Reason="", readiness=false. Elapsed: 19.356986ms Apr 10 13:01:57.982: INFO: Pod "client-containers-321b0261-294c-4112-b3cd-5972224b48ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023456409s Apr 10 13:01:59.987: INFO: Pod "client-containers-321b0261-294c-4112-b3cd-5972224b48ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028043969s STEP: Saw pod success Apr 10 13:01:59.987: INFO: Pod "client-containers-321b0261-294c-4112-b3cd-5972224b48ed" satisfied condition "success or failure" Apr 10 13:01:59.990: INFO: Trying to get logs from node iruya-worker2 pod client-containers-321b0261-294c-4112-b3cd-5972224b48ed container test-container: STEP: delete the pod Apr 10 13:02:00.012: INFO: Waiting for pod client-containers-321b0261-294c-4112-b3cd-5972224b48ed to disappear Apr 10 13:02:00.016: INFO: Pod client-containers-321b0261-294c-4112-b3cd-5972224b48ed no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:02:00.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4711" for this suite. 
Apr 10 13:02:06.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:02:06.112: INFO: namespace containers-4711 deletion completed in 6.09214341s • [SLOW TEST:10.250 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:02:06.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 13:02:06.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff" in namespace "projected-5658" to be "success or failure" Apr 10 13:02:06.217: INFO: Pod "downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.284432ms Apr 10 13:02:08.221: INFO: Pod "downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013297704s Apr 10 13:02:10.226: INFO: Pod "downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018129032s STEP: Saw pod success Apr 10 13:02:10.226: INFO: Pod "downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff" satisfied condition "success or failure" Apr 10 13:02:10.229: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff container client-container: STEP: delete the pod Apr 10 13:02:10.276: INFO: Waiting for pod downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff to disappear Apr 10 13:02:10.286: INFO: Pod downwardapi-volume-3a985705-1610-4551-a3b5-848bb3bd8aff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:02:10.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5658" for this suite. 
Apr 10 13:02:16.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:02:16.429: INFO: namespace projected-5658 deletion completed in 6.139483212s • [SLOW TEST:10.317 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:02:16.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 10 13:02:16.500: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 13:02:16.508: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 13:02:16.510: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 10 13:02:16.516: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.516: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 13:02:16.516: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.516: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:02:16.516: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 10 13:02:16.521: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.521: INFO: Container coredns ready: true, restart count 0 Apr 10 13:02:16.521: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.521: INFO: Container coredns ready: true, restart count 0 Apr 10 13:02:16.521: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.521: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:02:16.521: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 10 13:02:16.521: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Apr 10 13:02:16.579: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Apr 10 13:02:16.579: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Apr 10 13:02:16.579: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Apr 10 13:02:16.579: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Apr 10 13:02:16.579: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Apr 10 13:02:16.579: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75.160476eec343f414], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2679/filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75.160476ef103e1c49], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75.160476ef5428d8f7], Reason = [Created], Message = [Created container filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75] STEP: Considering event: Type = [Normal], Name = [filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75.160476ef67d5d362], Reason = [Started], Message = [Started container filler-pod-49ab1405-b2c8-4d33-accd-7628a06a3b75] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1.160476eec5e688bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2679/filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1.160476ef462998af], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1.160476ef7bdfaac8], Reason = [Created], Message = [Created 
container filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1] STEP: Considering event: Type = [Normal], Name = [filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1.160476ef89d5df41], Reason = [Started], Message = [Started container filler-pod-a1c2e562-c58b-45f3-9d96-71bb3bd194b1] STEP: Considering event: Type = [Warning], Name = [additional-pod.160476efb562f1e5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:02:21.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2679" for this suite. 
Apr 10 13:02:27.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:02:27.829: INFO: namespace sched-pred-2679 deletion completed in 6.087702154s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.400 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:02:27.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:02:27.971: INFO: Creating deployment "nginx-deployment" Apr 10 13:02:27.982: INFO: Waiting for observed generation 1 Apr 10 13:02:29.991: INFO: Waiting for all required pods to come up Apr 10 13:02:29.995: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 10 13:02:38.018: INFO: Waiting for deployment 
"nginx-deployment" to complete Apr 10 13:02:38.022: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 10 13:02:38.042: INFO: Updating deployment nginx-deployment Apr 10 13:02:38.042: INFO: Waiting for observed generation 2 Apr 10 13:02:40.052: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 10 13:02:40.055: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 10 13:02:40.058: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 10 13:02:40.066: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 10 13:02:40.066: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 10 13:02:40.068: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 10 13:02:40.072: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 10 13:02:40.072: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 10 13:02:40.078: INFO: Updating deployment nginx-deployment Apr 10 13:02:40.078: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 10 13:02:40.369: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 10 13:02:40.627: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 10 13:02:40.954: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4431,SelfLink:/apis/apps/v1/namespaces/deployment-4431/deployments/nginx-deployment,UID:32c6b8e3-66f3-40ee-ad70-8d4607c46b9f,ResourceVersion:4660009,Generation:3,CreationTimestamp:2020-04-10 13:02:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-10 13:02:38 +0000 UTC 2020-04-10 13:02:27 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-10 13:02:40 +0000 UTC 2020-04-10 13:02:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 10 13:02:41.058: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4431,SelfLink:/apis/apps/v1/namespaces/deployment-4431/replicasets/nginx-deployment-55fb7cb77f,UID:e55f9454-eead-464a-ab4f-fb5d02d96a4c,ResourceVersion:4660030,Generation:3,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 32c6b8e3-66f3-40ee-ad70-8d4607c46b9f 0xc002b31b87 0xc002b31b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 10 13:02:41.058: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 10 13:02:41.058: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4431,SelfLink:/apis/apps/v1/namespaces/deployment-4431/replicasets/nginx-deployment-7b8c6f4498,UID:c21e58e9-0dad-4e6b-bdf3-8e6254384ada,ResourceVersion:4660025,Generation:3,CreationTimestamp:2020-04-10 13:02:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 32c6b8e3-66f3-40ee-ad70-8d4607c46b9f 0xc002b31c57 0xc002b31c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 10 13:02:41.206: INFO: Pod "nginx-deployment-55fb7cb77f-cv5m5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cv5m5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-cv5m5,UID:bf82d8ac-a930-47e6-994d-0dc13ce1895f,ResourceVersion:4660021,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc0029405d7 0xc0029405d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002940650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-f6s2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f6s2j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-f6s2j,UID:119b5123-72bf-46ad-907d-24fbb7ed8faa,ResourceVersion:4660034,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc0029406f7 0xc0029406f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002940770} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-10 13:02:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-g2nmf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g2nmf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-g2nmf,UID:7245de42-7b42-4f70-9d00-51ef762738a3,ResourceVersion:4659964,Generation:0,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002940860 0xc002940861}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0029408e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-10 13:02:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-g6lz2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g6lz2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-g6lz2,UID:60cfe9a5-ae65-44f7-8161-ab10bb2d6938,ResourceVersion:4659935,Generation:0,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc0029409d0 0xc0029409d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002940a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-10 13:02:38 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-g9zn9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g9zn9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-g9zn9,UID:3840f912-0f84-400c-9cfd-1c2fee398447,ResourceVersion:4660000,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002940b40 0xc002940b41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002940bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-k2xxc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k2xxc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-k2xxc,UID:b1870cd2-4f84-4518-998f-5a5dece2db5b,ResourceVersion:4660027,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002940c67 0xc002940c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002940ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-kcm9l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kcm9l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-kcm9l,UID:3f8bf329-e0e3-412f-a165-217dc3be6747,ResourceVersion:4659960,Generation:0,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002940d87 0xc002940d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002940e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-10 13:02:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.207: INFO: Pod "nginx-deployment-55fb7cb77f-lpsql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lpsql,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-lpsql,UID:2ed98801-99b5-4788-a568-be3af50b509d,ResourceVersion:4660033,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002940ef0 0xc002940ef1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002940f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002940f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-10 13:02:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-55fb7cb77f-ngmh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ngmh8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-ngmh8,UID:27e17a21-df28-4722-9008-dc7c863f8873,ResourceVersion:4660016,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002941060 0xc002941061}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029410e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-55fb7cb77f-tcfrg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tcfrg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-tcfrg,UID:3922320d-84db-4ca9-83a3-cd421710a917,ResourceVersion:4660017,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002941187 0xc002941188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941200} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-55fb7cb77f-vpvlx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vpvlx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-vpvlx,UID:50891c58-ed60-439a-a7e5-ef1515c96948,ResourceVersion:4660020,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc0029412a7 0xc0029412a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002941320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-55fb7cb77f-wz2l7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wz2l7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-wz2l7,UID:cf929caa-7ec6-459d-b3fb-a8ffdb5924e1,ResourceVersion:4659937,Generation:0,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc0029413c7 0xc0029413c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941440} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-10 13:02:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-55fb7cb77f-z5gkj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z5gkj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-55fb7cb77f-z5gkj,UID:692af4f8-17f2-4435-a2cb-7c8b90496c53,ResourceVersion:4659949,Generation:0,CreationTimestamp:2020-04-10 13:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e55f9454-eead-464a-ab4f-fb5d02d96a4c 0xc002941530 0xc002941531}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0029415b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029415d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-10 13:02:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.208: INFO: Pod "nginx-deployment-7b8c6f4498-462vg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-462vg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-462vg,UID:ea69afc9-e0f8-45d3-97f3-fe2db182b1d8,ResourceVersion:4659901,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029416a0 0xc0029416a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.64,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:36 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4c6817f35effd51c15eed9d0e9a4157a577bc1572144a31f18bf43dd7a12159b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.209: INFO: Pod "nginx-deployment-7b8c6f4498-b4bgn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b4bgn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-b4bgn,UID:76f428b3-a35f-4072-b62c-8da33ce4606f,ResourceVersion:4659990,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941807 0xc002941808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029418a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.209: INFO: Pod "nginx-deployment-7b8c6f4498-cnph2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cnph2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-cnph2,UID:19a42e0f-e87e-491e-bea7-c566d919dc2e,ResourceVersion:4660007,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941927 0xc002941928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029419a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029419c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.209: INFO: Pod "nginx-deployment-7b8c6f4498-czlgl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-czlgl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-czlgl,UID:c70fab8b-a684-4f73-8c74-7cf6c46ac97e,ResourceVersion:4659865,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941a47 0xc002941a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.61,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2832ba54a8ab4d24a7b417858ecaca60d87e9f5e42e9ba14b539077f194c2676}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.209: INFO: Pod "nginx-deployment-7b8c6f4498-d6sfn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d6sfn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-d6sfn,UID:aac406ed-1036-4470-92e3-60d2e598db89,ResourceVersion:4660011,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941bb7 0xc002941bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.209: INFO: Pod "nginx-deployment-7b8c6f4498-dp4z2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dp4z2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-dp4z2,UID:4832ea6a-776d-4d4b-91a1-ef610a622552,ResourceVersion:4659855,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941cd7 0xc002941cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.107,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-10 13:02:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c1f2cb2f2080bad1bf0aa207ed17b97f31183132adc70621ab5b57bbf5b97019}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.210: INFO: Pod "nginx-deployment-7b8c6f4498-fw59w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fw59w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-fw59w,UID:132304f0-d0fa-4772-b9f1-99255ac7a9be,ResourceVersion:4659877,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941e47 0xc002941e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002941ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002941ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.108,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://67a7dcb58e3e740ebf6d63b82563758ee966dc5045849ad61fb595e40b112e21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.210: INFO: Pod "nginx-deployment-7b8c6f4498-gkprr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gkprr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-gkprr,UID:c066fc88-f7a3-47a7-8ac6-a3b6c7ad47b6,ResourceVersion:4659896,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc002941fb7 0xc002941fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.109,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://96d126f510ab81ad17b810b16b234dbe964a1e5c346a3f66de5933db6795fb9e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.210: INFO: Pod "nginx-deployment-7b8c6f4498-hcgkd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hcgkd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-hcgkd,UID:5be49ed5-ca6c-4e06-96e6-21d7d6439132,ResourceVersion:4659998,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2127 0xc0029c2128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c21a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c21c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.210: INFO: Pod "nginx-deployment-7b8c6f4498-k8bqq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k8bqq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-k8bqq,UID:f1f565bb-d042-40b0-8c98-99517bea919e,ResourceVersion:4660012,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2247 0xc0029c2248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c22c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c22e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.211: INFO: Pod "nginx-deployment-7b8c6f4498-mvj52" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mvj52,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-mvj52,UID:5774c1a6-e09e-49a8-aff0-3da13157bbe4,ResourceVersion:4660006,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2367 0xc0029c2368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c23e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.211: INFO: Pod "nginx-deployment-7b8c6f4498-p24zv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p24zv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-p24zv,UID:04151f24-110d-4ac9-824c-cd787b40aab1,ResourceVersion:4660014,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2487 0xc0029c2488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2500} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.211: INFO: Pod "nginx-deployment-7b8c6f4498-qcmqj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qcmqj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-qcmqj,UID:1085c83b-f245-42ca-8778-9ca03baf654c,ResourceVersion:4660023,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c25a7 0xc0029c25a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-10 13:02:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.212: INFO: Pod "nginx-deployment-7b8c6f4498-qs4xm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qs4xm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-qs4xm,UID:5659b947-6d20-4330-9c8b-2b15dd9f1f9b,ResourceVersion:4659860,Generation:0,CreationTimestamp:2020-04-10 13:02:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2707 0xc0029c2708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c27a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.106,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7c560b396ac3daf7c00ece3f9b735460f0becf519fe245f2ad279232a6f78711}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.212: INFO: Pod "nginx-deployment-7b8c6f4498-sfwsf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sfwsf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-sfwsf,UID:0f8cbca3-cad2-485d-8bb6-7c40e8613862,ResourceVersion:4660010,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2877 0xc0029c2878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c28f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.212: INFO: Pod "nginx-deployment-7b8c6f4498-v5ln7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v5ln7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-v5ln7,UID:9d02703a-31f5-4ac4-b009-4fa30d96232f,ResourceVersion:4659886,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2997 0xc0029c2998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.62,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-10 13:02:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://540e42a9f66f12a98e5bd2974b601cf00902594cad1cd1565b5a314529e8c870}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.212: INFO: Pod "nginx-deployment-7b8c6f4498-vgfgc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vgfgc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-vgfgc,UID:ad450148-bb7b-42f4-ab29-8f1c581538f0,ResourceVersion:4660018,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2b07 0xc0029c2b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.213: INFO: Pod "nginx-deployment-7b8c6f4498-wq9ws" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wq9ws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-wq9ws,UID:e4530561-dd38-4277-a155-139914414d20,ResourceVersion:4659991,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2c27 0xc0029c2c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.213: INFO: Pod "nginx-deployment-7b8c6f4498-zssn9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zssn9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-zssn9,UID:6fba0c03-6092-4ff3-b956-09df946b9092,ResourceVersion:4659904,Generation:0,CreationTimestamp:2020-04-10 13:02:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2d47 0xc0029c2d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.63,StartTime:2020-04-10 13:02:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 13:02:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c7b6ff2ea9f6a738b60d443d1318c18f6e3e410f7143c2a558a0fe1237bd165c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 13:02:41.213: INFO: Pod "nginx-deployment-7b8c6f4498-zvkjx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zvkjx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4431,SelfLink:/api/v1/namespaces/deployment-4431/pods/nginx-deployment-7b8c6f4498-zvkjx,UID:ad942706-2b05-4427-8191-f8e276a25f0d,ResourceVersion:4660008,Generation:0,CreationTimestamp:2020-04-10 13:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c21e58e9-0dad-4e6b-bdf3-8e6254384ada 0xc0029c2eb7 0xc0029c2eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bsdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bsdrn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c2f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c2f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:02:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:02:41.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4431" for this suite. 
Apr 10 13:02:57.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:02:57.496: INFO: namespace deployment-4431 deletion completed in 16.216199417s • [SLOW TEST:29.666 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:02:57.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5829/configmap-test-63ddcded-8b64-446b-b3bf-103bbe77dd37 STEP: Creating a pod to test consume configMaps Apr 10 13:02:57.657: INFO: Waiting up to 5m0s for pod "pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7" in namespace "configmap-5829" to be "success or failure" Apr 10 13:02:57.660: INFO: Pod "pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381071ms Apr 10 13:02:59.665: INFO: Pod "pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008009398s Apr 10 13:03:01.669: INFO: Pod "pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012607806s STEP: Saw pod success Apr 10 13:03:01.669: INFO: Pod "pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7" satisfied condition "success or failure" Apr 10 13:03:01.673: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7 container env-test: STEP: delete the pod Apr 10 13:03:01.708: INFO: Waiting for pod pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7 to disappear Apr 10 13:03:01.746: INFO: Pod pod-configmaps-758010a5-2b52-4bb4-a0c4-34c0b4d159d7 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:03:01.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5829" for this suite. Apr 10 13:03:07.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:03:07.827: INFO: namespace configmap-5829 deletion completed in 6.076928758s • [SLOW TEST:10.331 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:03:07.827: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Apr 10 13:03:07.867: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Apr 10 13:03:07.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:08.223: INFO: stderr: "" Apr 10 13:03:08.223: INFO: stdout: "service/redis-slave created\n" Apr 10 13:03:08.224: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Apr 10 13:03:08.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:08.593: INFO: stderr: "" Apr 10 13:03:08.593: INFO: stdout: "service/redis-master created\n" Apr 10 13:03:08.594: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 10 13:03:08.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:08.919: INFO: stderr: "" Apr 10 13:03:08.919: INFO: stdout: "service/frontend created\n" Apr 10 13:03:08.919: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Apr 10 13:03:08.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:09.206: INFO: stderr: "" Apr 10 13:03:09.206: INFO: stdout: "deployment.apps/frontend created\n" Apr 10 13:03:09.207: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 10 13:03:09.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:09.524: INFO: stderr: "" Apr 10 13:03:09.524: INFO: stdout: "deployment.apps/redis-master created\n" Apr 10 13:03:09.525: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: 
metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Apr 10 13:03:09.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6742' Apr 10 13:03:09.826: INFO: stderr: "" Apr 10 13:03:09.826: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Apr 10 13:03:09.826: INFO: Waiting for all frontend pods to be Running. Apr 10 13:03:19.877: INFO: Waiting for frontend to serve content. Apr 10 13:03:19.894: INFO: Trying to add a new entry to the guestbook. Apr 10 13:03:19.912: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 10 13:03:19.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.102: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 10 13:03:20.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.276: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.276: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 10 13:03:20.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.438: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.438: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 10 13:03:20.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.537: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.537: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 10 13:03:20.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.645: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.645: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 10 13:03:20.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6742' Apr 10 13:03:20.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:03:20.773: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:03:20.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6742" for this suite. Apr 10 13:04:02.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:04:02.912: INFO: namespace kubectl-6742 deletion completed in 42.111830159s • [SLOW TEST:55.085 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:04:02.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6027 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6027 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6027 Apr 10 13:04:03.019: INFO: Found 0 stateful pods, waiting for 1 Apr 10 13:04:13.024: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 10 13:04:13.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:04:13.300: INFO: stderr: "I0410 13:04:13.160689 346 log.go:172] (0xc00011edc0) (0xc000854640) Create stream\nI0410 13:04:13.160757 346 log.go:172] (0xc00011edc0) (0xc000854640) Stream added, broadcasting: 1\nI0410 13:04:13.164066 346 log.go:172] (0xc00011edc0) Reply frame received for 1\nI0410 13:04:13.164127 346 log.go:172] (0xc00011edc0) (0xc0006b0320) Create stream\nI0410 13:04:13.164152 346 log.go:172] (0xc00011edc0) (0xc0006b0320) Stream added, broadcasting: 3\nI0410 13:04:13.165238 346 log.go:172] (0xc00011edc0) Reply frame received for 3\nI0410 13:04:13.165272 346 log.go:172] (0xc00011edc0) (0xc0008546e0) Create stream\nI0410 13:04:13.165282 346 log.go:172] (0xc00011edc0) (0xc0008546e0) Stream added, broadcasting: 5\nI0410 13:04:13.166358 346 log.go:172] (0xc00011edc0) Reply frame received for 5\nI0410 13:04:13.265690 346 log.go:172] (0xc00011edc0) Data frame received for 5\nI0410 13:04:13.265723 346 log.go:172] (0xc0008546e0) (5) Data frame 
handling\nI0410 13:04:13.265734 346 log.go:172] (0xc0008546e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:04:13.292459 346 log.go:172] (0xc00011edc0) Data frame received for 3\nI0410 13:04:13.292495 346 log.go:172] (0xc0006b0320) (3) Data frame handling\nI0410 13:04:13.292526 346 log.go:172] (0xc0006b0320) (3) Data frame sent\nI0410 13:04:13.292757 346 log.go:172] (0xc00011edc0) Data frame received for 3\nI0410 13:04:13.292811 346 log.go:172] (0xc00011edc0) Data frame received for 5\nI0410 13:04:13.292867 346 log.go:172] (0xc0008546e0) (5) Data frame handling\nI0410 13:04:13.292911 346 log.go:172] (0xc0006b0320) (3) Data frame handling\nI0410 13:04:13.295212 346 log.go:172] (0xc00011edc0) Data frame received for 1\nI0410 13:04:13.295312 346 log.go:172] (0xc000854640) (1) Data frame handling\nI0410 13:04:13.295358 346 log.go:172] (0xc000854640) (1) Data frame sent\nI0410 13:04:13.295412 346 log.go:172] (0xc00011edc0) (0xc000854640) Stream removed, broadcasting: 1\nI0410 13:04:13.295496 346 log.go:172] (0xc00011edc0) Go away received\nI0410 13:04:13.296080 346 log.go:172] (0xc00011edc0) (0xc000854640) Stream removed, broadcasting: 1\nI0410 13:04:13.296109 346 log.go:172] (0xc00011edc0) (0xc0006b0320) Stream removed, broadcasting: 3\nI0410 13:04:13.296125 346 log.go:172] (0xc00011edc0) (0xc0008546e0) Stream removed, broadcasting: 5\n" Apr 10 13:04:13.301: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:04:13.301: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 13:04:13.305: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 10 13:04:23.311: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 13:04:23.311: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 13:04:23.325: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 9.999999174s Apr 10 13:04:24.330: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996665903s Apr 10 13:04:25.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.99175709s Apr 10 13:04:26.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986818037s Apr 10 13:04:27.344: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981986949s Apr 10 13:04:28.349: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97735061s Apr 10 13:04:29.354: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972514141s Apr 10 13:04:30.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96759284s Apr 10 13:04:31.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962840449s Apr 10 13:04:32.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.621219ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6027 Apr 10 13:04:33.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:04:33.632: INFO: stderr: "I0410 13:04:33.516064 365 log.go:172] (0xc00013f130) (0xc00060e960) Create stream\nI0410 13:04:33.516133 365 log.go:172] (0xc00013f130) (0xc00060e960) Stream added, broadcasting: 1\nI0410 13:04:33.519367 365 log.go:172] (0xc00013f130) Reply frame received for 1\nI0410 13:04:33.519446 365 log.go:172] (0xc00013f130) (0xc000856000) Create stream\nI0410 13:04:33.519499 365 log.go:172] (0xc00013f130) (0xc000856000) Stream added, broadcasting: 3\nI0410 13:04:33.521766 365 log.go:172] (0xc00013f130) Reply frame received for 3\nI0410 13:04:33.521805 365 log.go:172] (0xc00013f130) (0xc00060e1e0) Create stream\nI0410 13:04:33.521817 365 log.go:172] (0xc00013f130) (0xc00060e1e0) Stream added, broadcasting: 5\nI0410 
13:04:33.522865 365 log.go:172] (0xc00013f130) Reply frame received for 5\nI0410 13:04:33.624625 365 log.go:172] (0xc00013f130) Data frame received for 5\nI0410 13:04:33.624647 365 log.go:172] (0xc00060e1e0) (5) Data frame handling\nI0410 13:04:33.624656 365 log.go:172] (0xc00060e1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 13:04:33.626400 365 log.go:172] (0xc00013f130) Data frame received for 5\nI0410 13:04:33.626428 365 log.go:172] (0xc00060e1e0) (5) Data frame handling\nI0410 13:04:33.626485 365 log.go:172] (0xc00013f130) Data frame received for 3\nI0410 13:04:33.626523 365 log.go:172] (0xc000856000) (3) Data frame handling\nI0410 13:04:33.626545 365 log.go:172] (0xc000856000) (3) Data frame sent\nI0410 13:04:33.626561 365 log.go:172] (0xc00013f130) Data frame received for 3\nI0410 13:04:33.626593 365 log.go:172] (0xc000856000) (3) Data frame handling\nI0410 13:04:33.627819 365 log.go:172] (0xc00013f130) Data frame received for 1\nI0410 13:04:33.627844 365 log.go:172] (0xc00060e960) (1) Data frame handling\nI0410 13:04:33.627861 365 log.go:172] (0xc00060e960) (1) Data frame sent\nI0410 13:04:33.627878 365 log.go:172] (0xc00013f130) (0xc00060e960) Stream removed, broadcasting: 1\nI0410 13:04:33.627897 365 log.go:172] (0xc00013f130) Go away received\nI0410 13:04:33.628232 365 log.go:172] (0xc00013f130) (0xc00060e960) Stream removed, broadcasting: 1\nI0410 13:04:33.628255 365 log.go:172] (0xc00013f130) (0xc000856000) Stream removed, broadcasting: 3\nI0410 13:04:33.628265 365 log.go:172] (0xc00013f130) (0xc00060e1e0) Stream removed, broadcasting: 5\n" Apr 10 13:04:33.632: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 13:04:33.632: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 13:04:33.636: INFO: Found 1 stateful pods, waiting for 3 Apr 10 13:04:43.643: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true Apr 10 13:04:43.643: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 13:04:43.643: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 10 13:04:43.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:04:43.891: INFO: stderr: "I0410 13:04:43.800451 385 log.go:172] (0xc0008c8420) (0xc000370820) Create stream\nI0410 13:04:43.800520 385 log.go:172] (0xc0008c8420) (0xc000370820) Stream added, broadcasting: 1\nI0410 13:04:43.803859 385 log.go:172] (0xc0008c8420) Reply frame received for 1\nI0410 13:04:43.803903 385 log.go:172] (0xc0008c8420) (0xc000370000) Create stream\nI0410 13:04:43.803927 385 log.go:172] (0xc0008c8420) (0xc000370000) Stream added, broadcasting: 3\nI0410 13:04:43.804798 385 log.go:172] (0xc0008c8420) Reply frame received for 3\nI0410 13:04:43.804833 385 log.go:172] (0xc0008c8420) (0xc0004821e0) Create stream\nI0410 13:04:43.804847 385 log.go:172] (0xc0008c8420) (0xc0004821e0) Stream added, broadcasting: 5\nI0410 13:04:43.805896 385 log.go:172] (0xc0008c8420) Reply frame received for 5\nI0410 13:04:43.885191 385 log.go:172] (0xc0008c8420) Data frame received for 3\nI0410 13:04:43.885237 385 log.go:172] (0xc000370000) (3) Data frame handling\nI0410 13:04:43.885246 385 log.go:172] (0xc000370000) (3) Data frame sent\nI0410 13:04:43.885252 385 log.go:172] (0xc0008c8420) Data frame received for 3\nI0410 13:04:43.885257 385 log.go:172] (0xc000370000) (3) Data frame handling\nI0410 13:04:43.885290 385 log.go:172] (0xc0008c8420) Data frame received for 5\nI0410 13:04:43.885296 385 log.go:172] (0xc0004821e0) (5) Data frame handling\nI0410 13:04:43.885302 385 log.go:172] 
(0xc0004821e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:04:43.885390 385 log.go:172] (0xc0008c8420) Data frame received for 5\nI0410 13:04:43.885421 385 log.go:172] (0xc0004821e0) (5) Data frame handling\nI0410 13:04:43.887281 385 log.go:172] (0xc0008c8420) Data frame received for 1\nI0410 13:04:43.887314 385 log.go:172] (0xc000370820) (1) Data frame handling\nI0410 13:04:43.887344 385 log.go:172] (0xc000370820) (1) Data frame sent\nI0410 13:04:43.887375 385 log.go:172] (0xc0008c8420) (0xc000370820) Stream removed, broadcasting: 1\nI0410 13:04:43.887400 385 log.go:172] (0xc0008c8420) Go away received\nI0410 13:04:43.887681 385 log.go:172] (0xc0008c8420) (0xc000370820) Stream removed, broadcasting: 1\nI0410 13:04:43.887696 385 log.go:172] (0xc0008c8420) (0xc000370000) Stream removed, broadcasting: 3\nI0410 13:04:43.887702 385 log.go:172] (0xc0008c8420) (0xc0004821e0) Stream removed, broadcasting: 5\n" Apr 10 13:04:43.891: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:04:43.891: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 13:04:43.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:04:44.106: INFO: stderr: "I0410 13:04:44.016195 407 log.go:172] (0xc0009d0420) (0xc00088a5a0) Create stream\nI0410 13:04:44.016264 407 log.go:172] (0xc0009d0420) (0xc00088a5a0) Stream added, broadcasting: 1\nI0410 13:04:44.018570 407 log.go:172] (0xc0009d0420) Reply frame received for 1\nI0410 13:04:44.018608 407 log.go:172] (0xc0009d0420) (0xc00096e000) Create stream\nI0410 13:04:44.018621 407 log.go:172] (0xc0009d0420) (0xc00096e000) Stream added, broadcasting: 3\nI0410 13:04:44.019787 407 log.go:172] (0xc0009d0420) Reply frame received for 3\nI0410 13:04:44.019835 407 
log.go:172] (0xc0009d0420) (0xc00088a640) Create stream\nI0410 13:04:44.019851 407 log.go:172] (0xc0009d0420) (0xc00088a640) Stream added, broadcasting: 5\nI0410 13:04:44.020855 407 log.go:172] (0xc0009d0420) Reply frame received for 5\nI0410 13:04:44.075156 407 log.go:172] (0xc0009d0420) Data frame received for 5\nI0410 13:04:44.075188 407 log.go:172] (0xc00088a640) (5) Data frame handling\nI0410 13:04:44.075205 407 log.go:172] (0xc00088a640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:04:44.099410 407 log.go:172] (0xc0009d0420) Data frame received for 5\nI0410 13:04:44.099453 407 log.go:172] (0xc00088a640) (5) Data frame handling\nI0410 13:04:44.099486 407 log.go:172] (0xc0009d0420) Data frame received for 3\nI0410 13:04:44.099510 407 log.go:172] (0xc00096e000) (3) Data frame handling\nI0410 13:04:44.099556 407 log.go:172] (0xc00096e000) (3) Data frame sent\nI0410 13:04:44.099583 407 log.go:172] (0xc0009d0420) Data frame received for 3\nI0410 13:04:44.099599 407 log.go:172] (0xc00096e000) (3) Data frame handling\nI0410 13:04:44.101685 407 log.go:172] (0xc0009d0420) Data frame received for 1\nI0410 13:04:44.101727 407 log.go:172] (0xc00088a5a0) (1) Data frame handling\nI0410 13:04:44.101755 407 log.go:172] (0xc00088a5a0) (1) Data frame sent\nI0410 13:04:44.101797 407 log.go:172] (0xc0009d0420) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0410 13:04:44.101839 407 log.go:172] (0xc0009d0420) Go away received\nI0410 13:04:44.102170 407 log.go:172] (0xc0009d0420) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0410 13:04:44.102194 407 log.go:172] (0xc0009d0420) (0xc00096e000) Stream removed, broadcasting: 3\nI0410 13:04:44.102203 407 log.go:172] (0xc0009d0420) (0xc00088a640) Stream removed, broadcasting: 5\n" Apr 10 13:04:44.106: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:04:44.106: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 13:04:44.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:04:44.364: INFO: stderr: "I0410 13:04:44.258362 427 log.go:172] (0xc000131080) (0xc00029eaa0) Create stream\nI0410 13:04:44.258416 427 log.go:172] (0xc000131080) (0xc00029eaa0) Stream added, broadcasting: 1\nI0410 13:04:44.261792 427 log.go:172] (0xc000131080) Reply frame received for 1\nI0410 13:04:44.261835 427 log.go:172] (0xc000131080) (0xc0006a2000) Create stream\nI0410 13:04:44.261845 427 log.go:172] (0xc000131080) (0xc0006a2000) Stream added, broadcasting: 3\nI0410 13:04:44.262619 427 log.go:172] (0xc000131080) Reply frame received for 3\nI0410 13:04:44.262651 427 log.go:172] (0xc000131080) (0xc00029e320) Create stream\nI0410 13:04:44.262666 427 log.go:172] (0xc000131080) (0xc00029e320) Stream added, broadcasting: 5\nI0410 13:04:44.263476 427 log.go:172] (0xc000131080) Reply frame received for 5\nI0410 13:04:44.325689 427 log.go:172] (0xc000131080) Data frame received for 5\nI0410 13:04:44.325716 427 log.go:172] (0xc00029e320) (5) Data frame handling\nI0410 13:04:44.325734 427 log.go:172] (0xc00029e320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:04:44.357338 427 log.go:172] (0xc000131080) Data frame received for 3\nI0410 13:04:44.357366 427 log.go:172] (0xc0006a2000) (3) Data frame handling\nI0410 13:04:44.357373 427 log.go:172] (0xc0006a2000) (3) Data frame sent\nI0410 13:04:44.357407 427 log.go:172] (0xc000131080) Data frame received for 5\nI0410 13:04:44.357432 427 log.go:172] (0xc00029e320) (5) Data frame handling\nI0410 13:04:44.357606 427 log.go:172] (0xc000131080) Data frame received for 3\nI0410 13:04:44.357631 427 log.go:172] (0xc0006a2000) (3) Data frame handling\nI0410 13:04:44.359379 427 log.go:172] (0xc000131080) Data frame received for 1\nI0410 
13:04:44.359391 427 log.go:172] (0xc00029eaa0) (1) Data frame handling\nI0410 13:04:44.359404 427 log.go:172] (0xc00029eaa0) (1) Data frame sent\nI0410 13:04:44.359476 427 log.go:172] (0xc000131080) (0xc00029eaa0) Stream removed, broadcasting: 1\nI0410 13:04:44.359888 427 log.go:172] (0xc000131080) (0xc00029eaa0) Stream removed, broadcasting: 1\nI0410 13:04:44.359918 427 log.go:172] (0xc000131080) (0xc0006a2000) Stream removed, broadcasting: 3\nI0410 13:04:44.360130 427 log.go:172] (0xc000131080) (0xc00029e320) Stream removed, broadcasting: 5\n" Apr 10 13:04:44.365: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:04:44.365: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 13:04:44.365: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 13:04:44.368: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 10 13:04:54.377: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 13:04:54.377: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 10 13:04:54.377: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 10 13:04:54.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999599s Apr 10 13:04:55.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991510957s Apr 10 13:04:56.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987291122s Apr 10 13:04:57.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98162837s Apr 10 13:04:58.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976801258s Apr 10 13:04:59.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970737017s Apr 10 13:05:00.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 
3.965263108s Apr 10 13:05:01.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959499341s Apr 10 13:05:02.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954543075s Apr 10 13:05:03.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.499991ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6027 Apr 10 13:05:04.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:04.686: INFO: stderr: "I0410 13:05:04.575227 447 log.go:172] (0xc000116fd0) (0xc00020ab40) Create stream\nI0410 13:05:04.575273 447 log.go:172] (0xc000116fd0) (0xc00020ab40) Stream added, broadcasting: 1\nI0410 13:05:04.579114 447 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0410 13:05:04.579178 447 log.go:172] (0xc000116fd0) (0xc00083a000) Create stream\nI0410 13:05:04.579198 447 log.go:172] (0xc000116fd0) (0xc00083a000) Stream added, broadcasting: 3\nI0410 13:05:04.580585 447 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0410 13:05:04.580680 447 log.go:172] (0xc000116fd0) (0xc00083a0a0) Create stream\nI0410 13:05:04.580743 447 log.go:172] (0xc000116fd0) (0xc00083a0a0) Stream added, broadcasting: 5\nI0410 13:05:04.583569 447 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0410 13:05:04.680469 447 log.go:172] (0xc000116fd0) Data frame received for 5\nI0410 13:05:04.680529 447 log.go:172] (0xc00083a0a0) (5) Data frame handling\nI0410 13:05:04.680549 447 log.go:172] (0xc00083a0a0) (5) Data frame sent\nI0410 13:05:04.680562 447 log.go:172] (0xc000116fd0) Data frame received for 5\nI0410 13:05:04.680571 447 log.go:172] (0xc00083a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 13:05:04.680598 447 log.go:172] (0xc000116fd0) Data frame received for 3\nI0410 13:05:04.680613 447
log.go:172] (0xc00083a000) (3) Data frame handling\nI0410 13:05:04.680629 447 log.go:172] (0xc00083a000) (3) Data frame sent\nI0410 13:05:04.680638 447 log.go:172] (0xc000116fd0) Data frame received for 3\nI0410 13:05:04.680645 447 log.go:172] (0xc00083a000) (3) Data frame handling\nI0410 13:05:04.682007 447 log.go:172] (0xc000116fd0) Data frame received for 1\nI0410 13:05:04.682025 447 log.go:172] (0xc00020ab40) (1) Data frame handling\nI0410 13:05:04.682035 447 log.go:172] (0xc00020ab40) (1) Data frame sent\nI0410 13:05:04.682045 447 log.go:172] (0xc000116fd0) (0xc00020ab40) Stream removed, broadcasting: 1\nI0410 13:05:04.682065 447 log.go:172] (0xc000116fd0) Go away received\nI0410 13:05:04.682469 447 log.go:172] (0xc000116fd0) (0xc00020ab40) Stream removed, broadcasting: 1\nI0410 13:05:04.682487 447 log.go:172] (0xc000116fd0) (0xc00083a000) Stream removed, broadcasting: 3\nI0410 13:05:04.682497 447 log.go:172] (0xc000116fd0) (0xc00083a0a0) Stream removed, broadcasting: 5\n" Apr 10 13:05:04.687: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 13:05:04.687: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 13:05:04.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:04.883: INFO: stderr: "I0410 13:05:04.814937 468 log.go:172] (0xc00096c0b0) (0xc0009d2140) Create stream\nI0410 13:05:04.814992 468 log.go:172] (0xc00096c0b0) (0xc0009d2140) Stream added, broadcasting: 1\nI0410 13:05:04.817405 468 log.go:172] (0xc00096c0b0) Reply frame received for 1\nI0410 13:05:04.817454 468 log.go:172] (0xc00096c0b0) (0xc000622280) Create stream\nI0410 13:05:04.817467 468 log.go:172] (0xc00096c0b0) (0xc000622280) Stream added, broadcasting: 3\nI0410 13:05:04.818441 468 log.go:172] (0xc00096c0b0) Reply frame received 
for 3\nI0410 13:05:04.818480 468 log.go:172] (0xc00096c0b0) (0xc000338000) Create stream\nI0410 13:05:04.818494 468 log.go:172] (0xc00096c0b0) (0xc000338000) Stream added, broadcasting: 5\nI0410 13:05:04.819668 468 log.go:172] (0xc00096c0b0) Reply frame received for 5\nI0410 13:05:04.876953 468 log.go:172] (0xc00096c0b0) Data frame received for 5\nI0410 13:05:04.876977 468 log.go:172] (0xc000338000) (5) Data frame handling\nI0410 13:05:04.876988 468 log.go:172] (0xc000338000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 13:05:04.877442 468 log.go:172] (0xc00096c0b0) Data frame received for 3\nI0410 13:05:04.877471 468 log.go:172] (0xc000622280) (3) Data frame handling\nI0410 13:05:04.877480 468 log.go:172] (0xc000622280) (3) Data frame sent\nI0410 13:05:04.877906 468 log.go:172] (0xc00096c0b0) Data frame received for 5\nI0410 13:05:04.877981 468 log.go:172] (0xc000338000) (5) Data frame handling\nI0410 13:05:04.878010 468 log.go:172] (0xc00096c0b0) Data frame received for 3\nI0410 13:05:04.878034 468 log.go:172] (0xc000622280) (3) Data frame handling\nI0410 13:05:04.879252 468 log.go:172] (0xc00096c0b0) Data frame received for 1\nI0410 13:05:04.879264 468 log.go:172] (0xc0009d2140) (1) Data frame handling\nI0410 13:05:04.879270 468 log.go:172] (0xc0009d2140) (1) Data frame sent\nI0410 13:05:04.879277 468 log.go:172] (0xc00096c0b0) (0xc0009d2140) Stream removed, broadcasting: 1\nI0410 13:05:04.879299 468 log.go:172] (0xc00096c0b0) Go away received\nI0410 13:05:04.879476 468 log.go:172] (0xc00096c0b0) (0xc0009d2140) Stream removed, broadcasting: 1\nI0410 13:05:04.879485 468 log.go:172] (0xc00096c0b0) (0xc000622280) Stream removed, broadcasting: 3\nI0410 13:05:04.879490 468 log.go:172] (0xc00096c0b0) (0xc000338000) Stream removed, broadcasting: 5\n" Apr 10 13:05:04.883: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 13:05:04.883: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on 
ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 13:05:04.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:05.162: INFO: rc: 137 Apr 10 13:05:05.162: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] '/tmp/index.html' -> '/usr/share/nginx/html/index.html' I0410 13:05:05.047706 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Create stream I0410 13:05:05.047757 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream added, broadcasting: 1 I0410 13:05:05.049702 490 log.go:172] (0xc00013adc0) Reply frame received for 1 I0410 13:05:05.049745 490 log.go:172] (0xc00013adc0) (0xc00083c000) Create stream I0410 13:05:05.049757 490 log.go:172] (0xc00013adc0) (0xc00083c000) Stream added, broadcasting: 3 I0410 13:05:05.050562 490 log.go:172] (0xc00013adc0) Reply frame received for 3 I0410 13:05:05.050584 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Create stream I0410 13:05:05.050592 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Stream added, broadcasting: 5 I0410 13:05:05.051286 490 log.go:172] (0xc00013adc0) Reply frame received for 5 I0410 13:05:05.112900 490 log.go:172] (0xc00013adc0) Data frame received for 3 I0410 13:05:05.112943 490 log.go:172] (0xc00083c000) (3) Data frame handling I0410 13:05:05.112956 490 log.go:172] (0xc00083c000) (3) Data frame sent I0410 13:05:05.112986 490 log.go:172] (0xc00013adc0) Data frame received for 5 I0410 13:05:05.113001 490 log.go:172] (0xc0002ee8c0) (5) Data frame handling I0410 13:05:05.113011 490 log.go:172] (0xc0002ee8c0) (5) Data frame sent + mv -v /tmp/index.html /usr/share/nginx/html/ I0410 13:05:05.151420 490 log.go:172] (0xc00013adc0) Data frame received for 3 I0410 13:05:05.151460 
490 log.go:172] (0xc00083c000) (3) Data frame handling I0410 13:05:05.151494 490 log.go:172] (0xc00013adc0) Data frame received for 5 I0410 13:05:05.151506 490 log.go:172] (0xc0002ee8c0) (5) Data frame handling I0410 13:05:05.155595 490 log.go:172] (0xc00013adc0) Data frame received for 1 I0410 13:05:05.155654 490 log.go:172] (0xc0002ee820) (1) Data frame handling I0410 13:05:05.155683 490 log.go:172] (0xc0002ee820) (1) Data frame sent I0410 13:05:05.155717 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream removed, broadcasting: 1 I0410 13:05:05.155750 490 log.go:172] (0xc00013adc0) Go away received I0410 13:05:05.156994 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream removed, broadcasting: 1 I0410 13:05:05.157034 490 log.go:172] (0xc00013adc0) (0xc00083c000) Stream removed, broadcasting: 3 I0410 13:05:05.157075 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Stream removed, broadcasting: 5 command terminated with exit code 137 [] 0xc001e66660 exit status 137 true [0xc0006cf1e0 0xc0006cf338 0xc0006cf3d8] [0xc0006cf1e0 0xc0006cf338 0xc0006cf3d8] [0xc0006cf278 0xc0006cf3c0] [0xba70e0 0xba70e0] 0xc002c46e40 }: Command stdout: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' stderr: I0410 13:05:05.047706 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Create stream I0410 13:05:05.047757 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream added, broadcasting: 1 I0410 13:05:05.049702 490 log.go:172] (0xc00013adc0) Reply frame received for 1 I0410 13:05:05.049745 490 log.go:172] (0xc00013adc0) (0xc00083c000) Create stream I0410 13:05:05.049757 490 log.go:172] (0xc00013adc0) (0xc00083c000) Stream added, broadcasting: 3 I0410 13:05:05.050562 490 log.go:172] (0xc00013adc0) Reply frame received for 3 I0410 13:05:05.050584 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Create stream I0410 13:05:05.050592 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Stream added, broadcasting: 5 I0410 13:05:05.051286 490 log.go:172] (0xc00013adc0) Reply frame received for 5 I0410 
13:05:05.112900 490 log.go:172] (0xc00013adc0) Data frame received for 3 I0410 13:05:05.112943 490 log.go:172] (0xc00083c000) (3) Data frame handling I0410 13:05:05.112956 490 log.go:172] (0xc00083c000) (3) Data frame sent I0410 13:05:05.112986 490 log.go:172] (0xc00013adc0) Data frame received for 5 I0410 13:05:05.113001 490 log.go:172] (0xc0002ee8c0) (5) Data frame handling I0410 13:05:05.113011 490 log.go:172] (0xc0002ee8c0) (5) Data frame sent + mv -v /tmp/index.html /usr/share/nginx/html/ I0410 13:05:05.151420 490 log.go:172] (0xc00013adc0) Data frame received for 3 I0410 13:05:05.151460 490 log.go:172] (0xc00083c000) (3) Data frame handling I0410 13:05:05.151494 490 log.go:172] (0xc00013adc0) Data frame received for 5 I0410 13:05:05.151506 490 log.go:172] (0xc0002ee8c0) (5) Data frame handling I0410 13:05:05.155595 490 log.go:172] (0xc00013adc0) Data frame received for 1 I0410 13:05:05.155654 490 log.go:172] (0xc0002ee820) (1) Data frame handling I0410 13:05:05.155683 490 log.go:172] (0xc0002ee820) (1) Data frame sent I0410 13:05:05.155717 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream removed, broadcasting: 1 I0410 13:05:05.155750 490 log.go:172] (0xc00013adc0) Go away received I0410 13:05:05.156994 490 log.go:172] (0xc00013adc0) (0xc0002ee820) Stream removed, broadcasting: 1 I0410 13:05:05.157034 490 log.go:172] (0xc00013adc0) (0xc00083c000) Stream removed, broadcasting: 3 I0410 13:05:05.157075 490 log.go:172] (0xc00013adc0) (0xc0002ee8c0) Stream removed, broadcasting: 5 command terminated with exit code 137 error: exit status 137 Apr 10 13:05:15.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:15.268: INFO: rc: 1 Apr 10 13:05:15.268: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0006d90e0 exit status 1 true [0xc000010668 0xc000010750 0xc0000107e0] [0xc000010668 0xc000010750 0xc0000107e0] [0xc000010710 0xc0000107c8] [0xba70e0 0xba70e0] 0xc001d05860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:05:25.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:25.368: INFO: rc: 1 Apr 10 13:05:25.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c13680 exit status 1 true [0xc0005ef368 0xc0005ef430 0xc0005ef460] [0xc0005ef368 0xc0005ef430 0xc0005ef460] [0xc0005ef3f8 0xc0005ef440] [0xba70e0 0xba70e0] 0xc00217fc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:05:35.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:35.462: INFO: rc: 1 Apr 10 13:05:35.462: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e66720 exit status 1 true [0xc0006cf3e8 0xc0006cf448 0xc0006cf518] [0xc0006cf3e8 0xc0006cf448 0xc0006cf518] [0xc0006cf430 0xc0006cf4e8] [0xba70e0 0xba70e0] 0xc002c47140 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:05:45.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:45.563: INFO: rc: 1 Apr 10 13:05:45.563: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e667e0 exit status 1 true [0xc0006cf5a0 0xc0006cf678 0xc0006cf730] [0xc0006cf5a0 0xc0006cf678 0xc0006cf730] [0xc0006cf620 0xc0006cf710] [0xba70e0 0xba70e0] 0xc002c47440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:05:55.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:05:55.665: INFO: rc: 1 Apr 10 13:05:55.665: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002ec0e70 exit status 1 true [0xc0025fe0c8 0xc0025fe0e0 0xc0025fe0f8] [0xc0025fe0c8 0xc0025fe0e0 0xc0025fe0f8] [0xc0025fe0d8 0xc0025fe0f0] [0xba70e0 0xba70e0] 0xc003077800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:05.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:05.762: INFO: rc: 1 Apr 10 13:06:05.762: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002ec0f60 exit status 1 true [0xc0025fe108 0xc0025fe120 0xc0025fe138] [0xc0025fe108 0xc0025fe120 0xc0025fe138] [0xc0025fe118 0xc0025fe130] [0xba70e0 0xba70e0] 0xc003077c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:15.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:15.866: INFO: rc: 1 Apr 10 13:06:15.866: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e668d0 exit status 1 true [0xc0006cf7f0 0xc0006cf830 0xc0006cf900] [0xc0006cf7f0 0xc0006cf830 0xc0006cf900] [0xc0006cf818 0xc0006cf8d8] [0xba70e0 0xba70e0] 0xc002c47740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:25.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:25.958: INFO: rc: 1 Apr 10 13:06:25.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0006d9230 exit status 1 true [0xc000010868 0xc000010980 0xc000010a10] [0xc000010868 
0xc000010980 0xc000010a10] [0xc000010950 0xc000010a08] [0xba70e0 0xba70e0] 0xc001c1bda0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:35.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:36.055: INFO: rc: 1 Apr 10 13:06:36.055: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002ec1020 exit status 1 true [0xc0025fe140 0xc0025fe158 0xc0025fe170] [0xc0025fe140 0xc0025fe158 0xc0025fe170] [0xc0025fe150 0xc0025fe168] [0xba70e0 0xba70e0] 0xc003077f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:46.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:46.158: INFO: rc: 1 Apr 10 13:06:46.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002bf6090 exit status 1 true [0xc0005ee700 0xc0005ee7d8 0xc0005ee928] [0xc0005ee700 0xc0005ee7d8 0xc0005ee928] [0xc0005ee7b0 0xc0005ee8a8] [0xba70e0 0xba70e0] 0xc001d312c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:06:56.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:06:56.255: INFO: rc: 1 Apr 10 13:06:56.255: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002bf6180 exit status 1 true [0xc0005ee930 0xc0005eea08 0xc0005eeb00] [0xc0005ee930 0xc0005eea08 0xc0005eeb00] [0xc0005ee9a8 0xc0005eeab8] [0xba70e0 0xba70e0] 0xc001fb4c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:06.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:06.390: INFO: rc: 1 Apr 10 13:07:06.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d50090 exit status 1 true [0xc000010210 0xc000010310 0xc0000103a0] [0xc000010210 0xc000010310 0xc0000103a0] [0xc000010290 0xc000010380] [0xba70e0 0xba70e0] 0xc0021191a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:16.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:16.491: INFO: rc: 1 Apr 10 13:07:16.491: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from 
server (NotFound): pods "ss-2" not found [] 0xc0025e6a20 exit status 1 true [0xc0006ce218 0xc0006ce590 0xc0006ce720] [0xc0006ce218 0xc0006ce590 0xc0006ce720] [0xc0006ce390 0xc0006ce6e0] [0xba70e0 0xba70e0] 0xc002d180c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:26.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:26.596: INFO: rc: 1 Apr 10 13:07:26.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d50150 exit status 1 true [0xc0000103a8 0xc0000104b0 0xc000010608] [0xc0000103a8 0xc0000104b0 0xc000010608] [0xc000010430 0xc0000105a8] [0xba70e0 0xba70e0] 0xc002c460c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:36.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:36.682: INFO: rc: 1 Apr 10 13:07:36.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025e6b40 exit status 1 true [0xc0006ce768 0xc0006ce8d8 0xc0006cee50] [0xc0006ce768 0xc0006ce8d8 0xc0006cee50] [0xc0006ce8c8 0xc0006cee10] [0xba70e0 0xba70e0] 0xc00217e780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:46.683: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:46.778: INFO: rc: 1 Apr 10 13:07:46.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c120c0 exit status 1 true [0xc0025fe000 0xc0025fe020 0xc0025fe038] [0xc0025fe000 0xc0025fe020 0xc0025fe038] [0xc0025fe010 0xc0025fe030] [0xba70e0 0xba70e0] 0xc003077020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:07:56.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:07:56.882: INFO: rc: 1 Apr 10 13:07:56.882: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c12180 exit status 1 true [0xc0025fe040 0xc0025fe058 0xc0025fe070] [0xc0025fe040 0xc0025fe058 0xc0025fe070] [0xc0025fe050 0xc0025fe068] [0xba70e0 0xba70e0] 0xc003077320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:06.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:06.975: INFO: rc: 1 Apr 10 13:08:06.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025e6c30 exit status 1 true [0xc0006cee88 0xc0006cef60 0xc0006ceff0] [0xc0006cee88 0xc0006cef60 0xc0006ceff0] [0xc0006cef18 0xc0006cefd0] [0xba70e0 0xba70e0] 0xc00217ea80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:16.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:17.068: INFO: rc: 1 Apr 10 13:08:17.068: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025e6cf0 exit status 1 true [0xc0006cf058 0xc0006cf1e0 0xc0006cf338] [0xc0006cf058 0xc0006cf1e0 0xc0006cf338] [0xc0006cf160 0xc0006cf278] [0xba70e0 0xba70e0] 0xc00217ede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:27.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:29.354: INFO: rc: 1 Apr 10 13:08:29.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d50240 exit status 1 true [0xc000010650 0xc000010710 0xc0000107c8] [0xc000010650 0xc000010710 0xc0000107c8] [0xc0000106e8 0xc0000107c0] [0xba70e0 
0xba70e0] 0xc002c463c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:39.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:39.450: INFO: rc: 1 Apr 10 13:08:39.450: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025e6de0 exit status 1 true [0xc0006cf388 0xc0006cf3e8 0xc0006cf448] [0xc0006cf388 0xc0006cf3e8 0xc0006cf448] [0xc0006cf3d8 0xc0006cf430] [0xba70e0 0xba70e0] 0xc00217f0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:49.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:49.547: INFO: rc: 1 Apr 10 13:08:49.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d500c0 exit status 1 true [0xc000010250 0xc000010330 0xc0000103a8] [0xc000010250 0xc000010330 0xc0000103a8] [0xc000010310 0xc0000103a0] [0xba70e0 0xba70e0] 0xc0022f8b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:08:59.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:08:59.643: INFO: 
rc: 1 Apr 10 13:08:59.643: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d501b0 exit status 1 true [0xc0000103b0 0xc000010598 0xc000010650] [0xc0000103b0 0xc000010598 0xc000010650] [0xc0000104b0 0xc000010608] [0xba70e0 0xba70e0] 0xc0021191a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:09:09.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:09:09.738: INFO: rc: 1 Apr 10 13:09:09.738: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002bf60f0 exit status 1 true [0xc0006ce218 0xc0006ce590 0xc0006ce720] [0xc0006ce218 0xc0006ce590 0xc0006ce720] [0xc0006ce390 0xc0006ce6e0] [0xba70e0 0xba70e0] 0xc001d312c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:09:19.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:09:19.830: INFO: rc: 1 Apr 10 13:09:19.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c120f0 exit status 1 
true [0xc0005ee620 0xc0005ee7b0 0xc0005ee8a8] [0xc0005ee620 0xc0005ee7b0 0xc0005ee8a8] [0xc0005ee760 0xc0005ee7f0] [0xba70e0 0xba70e0] 0xc002c46060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:09:29.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:09:29.934: INFO: rc: 1 Apr 10 13:09:29.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d502a0 exit status 1 true [0xc000010668 0xc000010750 0xc0000107e0] [0xc000010668 0xc000010750 0xc0000107e0] [0xc000010710 0xc0000107c8] [0xba70e0 0xba70e0] 0xc00217e4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:09:39.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:09:40.040: INFO: rc: 1 Apr 10 13:09:40.040: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d50360 exit status 1 true [0xc000010868 0xc000010980 0xc000010a10] [0xc000010868 0xc000010980 0xc000010a10] [0xc000010950 0xc000010a08] [0xba70e0 0xba70e0] 0xc00217e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:09:50.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:09:50.130: INFO: rc: 1 Apr 10 13:09:50.130: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002bf61e0 exit status 1 true [0xc0006ce768 0xc0006ce8d8 0xc0006cee50] [0xc0006ce768 0xc0006ce8d8 0xc0006cee50] [0xc0006ce8c8 0xc0006cee10] [0xba70e0 0xba70e0] 0xc003076a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:10:00.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:10:00.233: INFO: rc: 1 Apr 10 13:10:00.233: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002d50450 exit status 1 true [0xc000010a30 0xc000010aa0 0xc000010b10] [0xc000010a30 0xc000010aa0 0xc000010b10] [0xc000010a88 0xc000010ac8] [0xba70e0 0xba70e0] 0xc00217ec60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 10 13:10:10.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:10:10.330: INFO: rc: 1 Apr 10 13:10:10.330: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Apr 10 13:10:10.330: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in 
reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 10 13:10:10.340: INFO: Deleting all statefulset in ns statefulset-6027 Apr 10 13:10:10.342: INFO: Scaling statefulset ss to 0 Apr 10 13:10:10.350: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 13:10:10.352: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:10:10.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6027" for this suite. Apr 10 13:10:16.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:10:16.457: INFO: namespace statefulset-6027 deletion completed in 6.087514288s • [SLOW TEST:373.544 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:10:16.458: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-aed0c226-db62-405e-aa24-49dbfa4f5f4f STEP: Creating a pod to test consume configMaps Apr 10 13:10:16.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb" in namespace "configmap-4157" to be "success or failure" Apr 10 13:10:16.518: INFO: Pod "pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668962ms Apr 10 13:10:18.521: INFO: Pod "pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007508381s Apr 10 13:10:20.526: INFO: Pod "pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01163836s STEP: Saw pod success Apr 10 13:10:20.526: INFO: Pod "pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb" satisfied condition "success or failure" Apr 10 13:10:20.528: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb container configmap-volume-test: STEP: delete the pod Apr 10 13:10:20.574: INFO: Waiting for pod pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb to disappear Apr 10 13:10:20.578: INFO: Pod pod-configmaps-65049ea3-5544-42a4-86bf-8af2995269fb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:10:20.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4157" for this suite. 
Apr 10 13:10:26.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:10:26.671: INFO: namespace configmap-4157 deletion completed in 6.090297545s • [SLOW TEST:10.214 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:10:26.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:10:30.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9441" for this suite. 
Apr 10 13:10:36.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:10:36.833: INFO: namespace kubelet-test-9441 deletion completed in 6.092079398s • [SLOW TEST:10.162 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:10:36.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:10:40.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7792" for this suite. 
Apr 10 13:11:22.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:11:23.058: INFO: namespace kubelet-test-7792 deletion completed in 42.115298698s • [SLOW TEST:46.225 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:11:23.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 10 13:11:23.144: INFO: Waiting up to 5m0s for pod "client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab" in namespace "containers-5947" to be "success or failure" Apr 10 13:11:23.172: INFO: Pod "client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.976674ms Apr 10 13:11:25.175: INFO: Pod "client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030653262s Apr 10 13:11:27.179: INFO: Pod "client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034715633s STEP: Saw pod success Apr 10 13:11:27.179: INFO: Pod "client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab" satisfied condition "success or failure" Apr 10 13:11:27.182: INFO: Trying to get logs from node iruya-worker pod client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab container test-container: STEP: delete the pod Apr 10 13:11:27.240: INFO: Waiting for pod client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab to disappear Apr 10 13:11:27.267: INFO: Pod client-containers-462b8f22-c757-4f80-85b8-d8bec13236ab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:11:27.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5947" for this suite. 
Apr 10 13:11:33.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:11:33.364: INFO: namespace containers-5947 deletion completed in 6.093501238s • [SLOW TEST:10.305 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:11:33.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471 Apr 10 13:11:33.441: INFO: Pod name my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471: Found 0 pods out of 1 Apr 10 13:11:38.446: INFO: Pod name my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471: Found 1 pods out of 1 Apr 10 13:11:38.446: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471" are running Apr 10 13:11:38.448: INFO: Pod "my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471-92jbd" is running (conditions: [{Type:Initialized 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:11:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:11:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:11:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:11:33 +0000 UTC Reason: Message:}]) Apr 10 13:11:38.449: INFO: Trying to dial the pod Apr 10 13:11:43.462: INFO: Controller my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471: Got expected result from replica 1 [my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471-92jbd]: "my-hostname-basic-1b2dd5d4-3130-4f16-827b-802f3732e471-92jbd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:11:43.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3401" for this suite. 
Apr 10 13:11:49.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:11:49.556: INFO: namespace replication-controller-3401 deletion completed in 6.090990544s • [SLOW TEST:16.192 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:11:49.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-d78ed7fa-3f0a-4aa2-bc2b-c2e7ae46af2f STEP: Creating configMap with name cm-test-opt-upd-837fdd50-5eab-4375-849e-c7201b347afa STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d78ed7fa-3f0a-4aa2-bc2b-c2e7ae46af2f STEP: Updating configmap cm-test-opt-upd-837fdd50-5eab-4375-849e-c7201b347afa STEP: Creating configMap with name cm-test-opt-create-38e18b06-0165-4332-bfde-5a54938a9ff1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:13:22.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3125" for this suite. Apr 10 13:13:44.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:13:44.225: INFO: namespace projected-3125 deletion completed in 22.103315937s • [SLOW TEST:114.668 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:13:44.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4a9df02c-bdce-46a1-a9c5-ba8e9d8e10cb STEP: Creating a pod to test consume secrets Apr 10 13:13:44.300: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967" in namespace "projected-5529" to be "success or failure" Apr 10 
13:13:44.305: INFO: Pod "pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967": Phase="Pending", Reason="", readiness=false. Elapsed: 4.867059ms Apr 10 13:13:46.309: INFO: Pod "pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008485295s Apr 10 13:13:48.314: INFO: Pod "pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013190284s STEP: Saw pod success Apr 10 13:13:48.314: INFO: Pod "pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967" satisfied condition "success or failure" Apr 10 13:13:48.317: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967 container projected-secret-volume-test: STEP: delete the pod Apr 10 13:13:48.348: INFO: Waiting for pod pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967 to disappear Apr 10 13:13:48.360: INFO: Pod pod-projected-secrets-89b7e65d-be96-425e-a891-2dfa7d1e8967 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:13:48.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5529" for this suite. 
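The "mappings and Item Mode set" wording above refers to a projected secret volume whose items remap a secret key to a new path with an explicit file mode. A minimal sketch of such a pod, using Kubernetes v1 API field names (the concrete names, key, path, and mode value are assumptions for illustration):

```python
# Sketch of a pod consuming a projected secret volume with a key-to-path
# mapping and an explicit per-item mode (0400). Names are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},
    "spec": {
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/projected-secret-volume/new-path-data-1"],
            "volumeMounts": [{
                "name": "projected-secret-volume",
                "mountPath": "/etc/projected-secret-volume",
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {
                "sources": [{
                    "secret": {
                        "name": "projected-secret-test-map-example",
                        "items": [{
                            "key": "data-1",          # the mapping:
                            "path": "new-path-data-1",  # key -> new path
                            "mode": 0o400,            # "Item Mode set"
                        }],
                    },
                }],
            },
        }],
        "restartPolicy": "Never",
    },
}
```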
Apr 10 13:13:54.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:13:54.456: INFO: namespace projected-5529 deletion completed in 6.092297031s • [SLOW TEST:10.231 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:13:54.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:13:54.576: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ca0888f4-b5ec-4749-ac7f-c640d13d263c", Controller:(*bool)(0xc001578372), BlockOwnerDeletion:(*bool)(0xc001578373)}} Apr 10 13:13:54.600: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f58e4efa-977b-4dd4-9e0b-19b777e49969", Controller:(*bool)(0xc00093b732), BlockOwnerDeletion:(*bool)(0xc00093b733)}} Apr 10 13:13:54.677: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9752a34e-dd8b-4fee-b178-6c3318dcff0e", Controller:(*bool)(0xc0030d8b6a), BlockOwnerDeletion:(*bool)(0xc0030d8b6b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:13:59.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8699" for this suite. Apr 10 13:14:05.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:14:05.827: INFO: namespace gc-8699 deletion completed in 6.098951325s • [SLOW TEST:11.371 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:14:05.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 10 13:14:05.901: INFO: Waiting up to 5m0s for pod 
"pod-c0c92a68-7cec-488d-9bf7-5589985b5b07" in namespace "emptydir-3372" to be "success or failure" Apr 10 13:14:05.918: INFO: Pod "pod-c0c92a68-7cec-488d-9bf7-5589985b5b07": Phase="Pending", Reason="", readiness=false. Elapsed: 17.667672ms Apr 10 13:14:07.922: INFO: Pod "pod-c0c92a68-7cec-488d-9bf7-5589985b5b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021686766s Apr 10 13:14:09.926: INFO: Pod "pod-c0c92a68-7cec-488d-9bf7-5589985b5b07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025428892s STEP: Saw pod success Apr 10 13:14:09.926: INFO: Pod "pod-c0c92a68-7cec-488d-9bf7-5589985b5b07" satisfied condition "success or failure" Apr 10 13:14:09.929: INFO: Trying to get logs from node iruya-worker pod pod-c0c92a68-7cec-488d-9bf7-5589985b5b07 container test-container: STEP: delete the pod Apr 10 13:14:10.038: INFO: Waiting for pod pod-c0c92a68-7cec-488d-9bf7-5589985b5b07 to disappear Apr 10 13:14:10.048: INFO: Pod pod-c0c92a68-7cec-488d-9bf7-5589985b5b07 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:14:10.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3372" for this suite. 
Apr 10 13:14:16.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:14:16.138: INFO: namespace emptydir-3372 deletion completed in 6.087684135s • [SLOW TEST:10.312 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:14:16.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 10 13:14:20.275: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:14:20.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-865" for this suite. Apr 10 13:14:26.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:14:26.395: INFO: namespace container-runtime-865 deletion completed in 6.086463476s • [SLOW TEST:10.255 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:14:26.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc 
to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0410 13:15:07.070879 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 10 13:15:07.070: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:15:07.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7708" for this suite.
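The "delete options say so" phrasing above refers to the `propagationPolicy` field of the v1 `DeleteOptions` body sent with the RC deletion; setting it to `Orphan` tells the garbage collector to leave the dependent pods alone, which is exactly what the 30-second wait then verifies. A minimal sketch of that request body:

```python
# Sketch of the DeleteOptions body that deletes a ReplicationController
# while orphaning its pods. propagationPolicy is a real v1 DeleteOptions
# field; the other valid values are "Background" and "Foreground".
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}
```

From kubectl of a similar vintage the equivalent was (roughly) `kubectl delete rc <name> --cascade=false`; newer releases spell it `--cascade=orphan`.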
Apr 10 13:15:15.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:15:15.149: INFO: namespace gc-7708 deletion completed in 8.074989311s • [SLOW TEST:48.754 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:15:15.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 13:15:15.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4" in namespace "downward-api-9794" to be "success or failure" Apr 10 13:15:15.337: INFO: Pod "downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.50049ms Apr 10 13:15:17.341: INFO: Pod "downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020013998s Apr 10 13:15:19.344: INFO: Pod "downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023309712s STEP: Saw pod success Apr 10 13:15:19.344: INFO: Pod "downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4" satisfied condition "success or failure" Apr 10 13:15:19.347: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4 container client-container: STEP: delete the pod Apr 10 13:15:19.372: INFO: Waiting for pod downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4 to disappear Apr 10 13:15:19.390: INFO: Pod downwardapi-volume-c99ca156-470a-44f9-bb08-aace301e43f4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:15:19.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9794" for this suite. 
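The memory limit reaches the container as a file because the downward API volume supports `resourceFieldRef` items. A sketch of the pod this kind of test creates, with v1 API field names (the concrete names, limit value, and file path are assumptions for illustration):

```python
# Sketch of a downward API volume exposing the container's own memory
# limit as a file via resourceFieldRef. Names are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            "resources": {"limits": {"memory": "64Mi"}},
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_limit",
                    "resourceFieldRef": {
                        # Must name the container whose limit to expose.
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }],
            },
        }],
        "restartPolicy": "Never",
    },
}
```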
Apr 10 13:15:25.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:15:25.479: INFO: namespace downward-api-9794 deletion completed in 6.085536441s • [SLOW TEST:10.329 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:15:25.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4534 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 10 13:15:25.578: INFO: Found 0 stateful pods, waiting for 3 Apr 10 13:15:35.583: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true 
Apr 10 13:15:35.583: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 13:15:35.583: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 10 13:15:35.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4534 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:15:35.844: INFO: stderr: "I0410 13:15:35.737976 1119 log.go:172] (0xc000994420) (0xc000350820) Create stream\nI0410 13:15:35.738063 1119 log.go:172] (0xc000994420) (0xc000350820) Stream added, broadcasting: 1\nI0410 13:15:35.741664 1119 log.go:172] (0xc000994420) Reply frame received for 1\nI0410 13:15:35.742022 1119 log.go:172] (0xc000994420) (0xc000a2a000) Create stream\nI0410 13:15:35.742063 1119 log.go:172] (0xc000994420) (0xc000a2a000) Stream added, broadcasting: 3\nI0410 13:15:35.743647 1119 log.go:172] (0xc000994420) Reply frame received for 3\nI0410 13:15:35.743738 1119 log.go:172] (0xc000994420) (0xc000806000) Create stream\nI0410 13:15:35.743920 1119 log.go:172] (0xc000994420) (0xc000806000) Stream added, broadcasting: 5\nI0410 13:15:35.745373 1119 log.go:172] (0xc000994420) Reply frame received for 5\nI0410 13:15:35.809868 1119 log.go:172] (0xc000994420) Data frame received for 5\nI0410 13:15:35.809913 1119 log.go:172] (0xc000806000) (5) Data frame handling\nI0410 13:15:35.809951 1119 log.go:172] (0xc000806000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:15:35.836796 1119 log.go:172] (0xc000994420) Data frame received for 3\nI0410 13:15:35.836820 1119 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0410 13:15:35.836831 1119 log.go:172] (0xc000a2a000) (3) Data frame sent\nI0410 13:15:35.836994 1119 log.go:172] (0xc000994420) Data frame received for 3\nI0410 13:15:35.837005 1119 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0410 13:15:35.837480 1119 log.go:172] (0xc000994420) Data 
frame received for 5\nI0410 13:15:35.837516 1119 log.go:172] (0xc000806000) (5) Data frame handling\nI0410 13:15:35.838921 1119 log.go:172] (0xc000994420) Data frame received for 1\nI0410 13:15:35.838933 1119 log.go:172] (0xc000350820) (1) Data frame handling\nI0410 13:15:35.838946 1119 log.go:172] (0xc000350820) (1) Data frame sent\nI0410 13:15:35.838955 1119 log.go:172] (0xc000994420) (0xc000350820) Stream removed, broadcasting: 1\nI0410 13:15:35.838965 1119 log.go:172] (0xc000994420) Go away received\nI0410 13:15:35.839507 1119 log.go:172] (0xc000994420) (0xc000350820) Stream removed, broadcasting: 1\nI0410 13:15:35.839532 1119 log.go:172] (0xc000994420) (0xc000a2a000) Stream removed, broadcasting: 3\nI0410 13:15:35.839544 1119 log.go:172] (0xc000994420) (0xc000806000) Stream removed, broadcasting: 5\n" Apr 10 13:15:35.845: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:15:35.845: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 10 13:15:45.875: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 10 13:15:55.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4534 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:15:56.181: INFO: stderr: "I0410 13:15:56.091702 1141 log.go:172] (0xc0006bcc60) (0xc000666aa0) Create stream\nI0410 13:15:56.091775 1141 log.go:172] (0xc0006bcc60) (0xc000666aa0) Stream added, broadcasting: 1\nI0410 13:15:56.095929 1141 log.go:172] (0xc0006bcc60) Reply frame received for 1\nI0410 13:15:56.095973 1141 log.go:172] (0xc0006bcc60) (0xc0006661e0) Create stream\nI0410 13:15:56.095985 1141 log.go:172] (0xc0006bcc60) (0xc0006661e0) Stream added, 
broadcasting: 3\nI0410 13:15:56.097273 1141 log.go:172] (0xc0006bcc60) Reply frame received for 3\nI0410 13:15:56.097342 1141 log.go:172] (0xc0006bcc60) (0xc00002c000) Create stream\nI0410 13:15:56.097361 1141 log.go:172] (0xc0006bcc60) (0xc00002c000) Stream added, broadcasting: 5\nI0410 13:15:56.098641 1141 log.go:172] (0xc0006bcc60) Reply frame received for 5\nI0410 13:15:56.176083 1141 log.go:172] (0xc0006bcc60) Data frame received for 5\nI0410 13:15:56.176118 1141 log.go:172] (0xc00002c000) (5) Data frame handling\nI0410 13:15:56.176129 1141 log.go:172] (0xc00002c000) (5) Data frame sent\nI0410 13:15:56.176136 1141 log.go:172] (0xc0006bcc60) Data frame received for 5\nI0410 13:15:56.176144 1141 log.go:172] (0xc00002c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 13:15:56.176167 1141 log.go:172] (0xc0006bcc60) Data frame received for 3\nI0410 13:15:56.176175 1141 log.go:172] (0xc0006661e0) (3) Data frame handling\nI0410 13:15:56.176183 1141 log.go:172] (0xc0006661e0) (3) Data frame sent\nI0410 13:15:56.176189 1141 log.go:172] (0xc0006bcc60) Data frame received for 3\nI0410 13:15:56.176196 1141 log.go:172] (0xc0006661e0) (3) Data frame handling\nI0410 13:15:56.177695 1141 log.go:172] (0xc0006bcc60) Data frame received for 1\nI0410 13:15:56.177725 1141 log.go:172] (0xc000666aa0) (1) Data frame handling\nI0410 13:15:56.177749 1141 log.go:172] (0xc000666aa0) (1) Data frame sent\nI0410 13:15:56.177765 1141 log.go:172] (0xc0006bcc60) (0xc000666aa0) Stream removed, broadcasting: 1\nI0410 13:15:56.177787 1141 log.go:172] (0xc0006bcc60) Go away received\nI0410 13:15:56.178159 1141 log.go:172] (0xc0006bcc60) (0xc000666aa0) Stream removed, broadcasting: 1\nI0410 13:15:56.178176 1141 log.go:172] (0xc0006bcc60) (0xc0006661e0) Stream removed, broadcasting: 3\nI0410 13:15:56.178184 1141 log.go:172] (0xc0006bcc60) (0xc00002c000) Stream removed, broadcasting: 5\n" Apr 10 13:15:56.181: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Apr 10 13:15:56.181: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 13:16:16.279: INFO: Waiting for StatefulSet statefulset-4534/ss2 to complete update Apr 10 13:16:16.279: INFO: Waiting for Pod statefulset-4534/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 10 13:16:26.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4534 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 13:16:26.536: INFO: stderr: "I0410 13:16:26.428176 1162 log.go:172] (0xc0008926e0) (0xc0008768c0) Create stream\nI0410 13:16:26.428244 1162 log.go:172] (0xc0008926e0) (0xc0008768c0) Stream added, broadcasting: 1\nI0410 13:16:26.431250 1162 log.go:172] (0xc0008926e0) Reply frame received for 1\nI0410 13:16:26.431369 1162 log.go:172] (0xc0008926e0) (0xc0006b4280) Create stream\nI0410 13:16:26.431418 1162 log.go:172] (0xc0008926e0) (0xc0006b4280) Stream added, broadcasting: 3\nI0410 13:16:26.433293 1162 log.go:172] (0xc0008926e0) Reply frame received for 3\nI0410 13:16:26.433335 1162 log.go:172] (0xc0008926e0) (0xc0006b4320) Create stream\nI0410 13:16:26.433351 1162 log.go:172] (0xc0008926e0) (0xc0006b4320) Stream added, broadcasting: 5\nI0410 13:16:26.434370 1162 log.go:172] (0xc0008926e0) Reply frame received for 5\nI0410 13:16:26.500342 1162 log.go:172] (0xc0008926e0) Data frame received for 5\nI0410 13:16:26.500372 1162 log.go:172] (0xc0006b4320) (5) Data frame handling\nI0410 13:16:26.500393 1162 log.go:172] (0xc0006b4320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 13:16:26.528320 1162 log.go:172] (0xc0008926e0) Data frame received for 3\nI0410 13:16:26.528368 1162 log.go:172] (0xc0006b4280) (3) Data frame handling\nI0410 13:16:26.528405 1162 log.go:172] (0xc0006b4280) (3) Data 
frame sent\nI0410 13:16:26.528637 1162 log.go:172] (0xc0008926e0) Data frame received for 3\nI0410 13:16:26.528692 1162 log.go:172] (0xc0006b4280) (3) Data frame handling\nI0410 13:16:26.528717 1162 log.go:172] (0xc0008926e0) Data frame received for 5\nI0410 13:16:26.528732 1162 log.go:172] (0xc0006b4320) (5) Data frame handling\nI0410 13:16:26.530739 1162 log.go:172] (0xc0008926e0) Data frame received for 1\nI0410 13:16:26.530769 1162 log.go:172] (0xc0008768c0) (1) Data frame handling\nI0410 13:16:26.530787 1162 log.go:172] (0xc0008768c0) (1) Data frame sent\nI0410 13:16:26.530811 1162 log.go:172] (0xc0008926e0) (0xc0008768c0) Stream removed, broadcasting: 1\nI0410 13:16:26.530839 1162 log.go:172] (0xc0008926e0) Go away received\nI0410 13:16:26.531274 1162 log.go:172] (0xc0008926e0) (0xc0008768c0) Stream removed, broadcasting: 1\nI0410 13:16:26.531299 1162 log.go:172] (0xc0008926e0) (0xc0006b4280) Stream removed, broadcasting: 3\nI0410 13:16:26.531312 1162 log.go:172] (0xc0008926e0) (0xc0006b4320) Stream removed, broadcasting: 5\n" Apr 10 13:16:26.536: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 13:16:26.536: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 13:16:36.570: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 10 13:16:46.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4534 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 13:16:46.893: INFO: stderr: "I0410 13:16:46.783002 1182 log.go:172] (0xc00012ad10) (0xc0006f6960) Create stream\nI0410 13:16:46.783069 1182 log.go:172] (0xc00012ad10) (0xc0006f6960) Stream added, broadcasting: 1\nI0410 13:16:46.792097 1182 log.go:172] (0xc00012ad10) Reply frame received for 1\nI0410 13:16:46.792143 1182 log.go:172] (0xc00012ad10) (0xc0006e8000) Create stream\nI0410 
13:16:46.792153 1182 log.go:172] (0xc00012ad10) (0xc0006e8000) Stream added, broadcasting: 3\nI0410 13:16:46.793484 1182 log.go:172] (0xc00012ad10) Reply frame received for 3\nI0410 13:16:46.793526 1182 log.go:172] (0xc00012ad10) (0xc0007ea000) Create stream\nI0410 13:16:46.793543 1182 log.go:172] (0xc00012ad10) (0xc0007ea000) Stream added, broadcasting: 5\nI0410 13:16:46.794316 1182 log.go:172] (0xc00012ad10) Reply frame received for 5\nI0410 13:16:46.883890 1182 log.go:172] (0xc00012ad10) Data frame received for 5\nI0410 13:16:46.883918 1182 log.go:172] (0xc0007ea000) (5) Data frame handling\nI0410 13:16:46.883936 1182 log.go:172] (0xc0007ea000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 13:16:46.887398 1182 log.go:172] (0xc00012ad10) Data frame received for 3\nI0410 13:16:46.887441 1182 log.go:172] (0xc0006e8000) (3) Data frame handling\nI0410 13:16:46.887473 1182 log.go:172] (0xc0006e8000) (3) Data frame sent\nI0410 13:16:46.887562 1182 log.go:172] (0xc00012ad10) Data frame received for 3\nI0410 13:16:46.887602 1182 log.go:172] (0xc0006e8000) (3) Data frame handling\nI0410 13:16:46.887730 1182 log.go:172] (0xc00012ad10) Data frame received for 5\nI0410 13:16:46.887757 1182 log.go:172] (0xc0007ea000) (5) Data frame handling\nI0410 13:16:46.889281 1182 log.go:172] (0xc00012ad10) Data frame received for 1\nI0410 13:16:46.889300 1182 log.go:172] (0xc0006f6960) (1) Data frame handling\nI0410 13:16:46.889322 1182 log.go:172] (0xc0006f6960) (1) Data frame sent\nI0410 13:16:46.889342 1182 log.go:172] (0xc00012ad10) (0xc0006f6960) Stream removed, broadcasting: 1\nI0410 13:16:46.889523 1182 log.go:172] (0xc00012ad10) Go away received\nI0410 13:16:46.889703 1182 log.go:172] (0xc00012ad10) (0xc0006f6960) Stream removed, broadcasting: 1\nI0410 13:16:46.889718 1182 log.go:172] (0xc00012ad10) (0xc0006e8000) Stream removed, broadcasting: 3\nI0410 13:16:46.889727 1182 log.go:172] (0xc00012ad10) (0xc0007ea000) Stream removed, broadcasting: 5\n" 
Apr 10 13:16:46.894: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 13:16:46.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 13:17:06.914: INFO: Waiting for StatefulSet statefulset-4534/ss2 to complete update Apr 10 13:17:06.914: INFO: Waiting for Pod statefulset-4534/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 10 13:17:16.923: INFO: Deleting all statefulset in ns statefulset-4534 Apr 10 13:17:16.926: INFO: Scaling statefulset ss2 to 0 Apr 10 13:17:46.942: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 13:17:46.944: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:17:46.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4534" for this suite. 
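The rolling update and rollback above are driven by the StatefulSet's `updateStrategy` plus an edit to the pod template: changing the template image produces a new controller revision (`ss2-7c9b54fd4c` replacing `ss2-6c5cd755cd` in the log), and reverting the template rolls back to the old revision. A sketch of the relevant spec fragment, with the images taken from the log and other names illustrative:

```python
# Sketch of the StatefulSet spec pieces this test exercises: a
# RollingUpdate strategy and the template image change that triggers a
# new revision. Images are from the log; container name is illustrative.
statefulset_spec_fragment = {
    "updateStrategy": {"type": "RollingUpdate"},
    "template": {
        "spec": {
            "containers": [{
                "name": "nginx",
                # Updated by the test from nginx:1.14-alpine to
                # nginx:1.15-alpine, then rolled back again.
                "image": "docker.io/library/nginx:1.15-alpine",
            }],
        },
    },
}
```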
Apr 10 13:17:52.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:17:53.057: INFO: namespace statefulset-4534 deletion completed in 6.084059748s
• [SLOW TEST:147.578 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:17:53.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 13:17:53.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:17:57.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5651" for this suite.
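The recurring "Waiting up to 5m0s for pod … to be 'success or failure'" lines above come from the framework's poll-until-timeout helpers (implemented in Go inside the e2e framework). A minimal, hypothetical shell sketch of that pattern — `wait_for` is an invented name, not a framework function:

```shell
# Sketch of the poll loop behind the "Waiting up to ... for pod ..." lines:
# retry a condition command at a fixed interval until it succeeds or time runs out.
# Usage: wait_for <timeout_secs> <interval_secs> <command...>
wait_for() {
  timeout=$1; interval=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then
      echo "condition met after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out after ${timeout}s"
  return 1
}

# Example with a condition that is immediately true:
wait_for 5 1 true
```

In the real run the condition is a pod-phase check against the API server; the per-iteration "Phase=…, Elapsed: …" lines in the log are the trace of each poll.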
Apr 10 13:18:47.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:18:47.387: INFO: namespace pods-5651 deletion completed in 50.107462396s
• [SLOW TEST:54.330 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:18:47.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bef72c87-5d93-44ad-a177-f6aebee00e63
STEP: Creating a pod to test consume configMaps
Apr 10 13:18:47.502: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2" in namespace "configmap-6319" to be "success or failure"
Apr 10 13:18:47.504: INFO: Pod "pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591123ms
Apr 10 13:18:49.517: INFO: Pod "pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015597578s
Apr 10 13:18:51.522: INFO: Pod "pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020501717s
STEP: Saw pod success
Apr 10 13:18:51.522: INFO: Pod "pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2" satisfied condition "success or failure"
Apr 10 13:18:51.526: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2 container configmap-volume-test:
STEP: delete the pod
Apr 10 13:18:51.539: INFO: Waiting for pod pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2 to disappear
Apr 10 13:18:51.544: INFO: Pod pod-configmaps-e1c1e1a6-ba48-49e0-b380-4346a65026f2 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:18:51.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6319" for this suite.
Apr 10 13:18:57.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:18:57.642: INFO: namespace configmap-6319 deletion completed in 6.095658048s
• [SLOW TEST:10.255 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:18:57.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-7379
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7379 to expose endpoints map[]
Apr 10 13:18:57.807: INFO: Get endpoints failed (22.431541ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 10 13:18:58.811: INFO: successfully validated that service endpoint-test2 in namespace services-7379 exposes endpoints map[] (1.026677214s elapsed)
STEP: Creating pod pod1 in namespace services-7379
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7379 to expose endpoints map[pod1:[80]]
Apr 10 13:19:02.856: INFO: successfully validated that service endpoint-test2 in namespace services-7379 exposes endpoints map[pod1:[80]] (4.037752886s elapsed)
STEP: Creating pod pod2 in namespace services-7379
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7379 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 10 13:19:05.940: INFO: successfully validated that service endpoint-test2 in namespace services-7379 exposes endpoints map[pod1:[80] pod2:[80]] (3.078705334s elapsed)
STEP: Deleting pod pod1 in namespace services-7379
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7379 to expose endpoints map[pod2:[80]]
Apr 10 13:19:07.007: INFO: successfully validated that service endpoint-test2 in namespace services-7379 exposes endpoints map[pod2:[80]] (1.062779484s elapsed)
STEP: Deleting pod pod2 in namespace services-7379
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7379 to expose endpoints map[]
Apr 10 13:19:08.032: INFO: successfully validated that service endpoint-test2 in namespace services-7379 exposes endpoints map[] (1.020622225s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:19:08.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7379" for this suite.
Apr 10 13:19:14.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:19:14.158: INFO: namespace services-7379 deletion completed in 6.081860725s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:16.516 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:19:14.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 13:19:14.203: INFO: Creating ReplicaSet my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c
Apr 10 13:19:14.227: INFO: Pod name my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c: Found 0 pods out of 1
Apr 10 13:19:19.232: INFO: Pod name my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c: Found 1 pods out of 1
Apr 10 13:19:19.232: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c" is running
Apr 10 13:19:19.235: INFO: Pod "my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c-6fvzw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:19:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:19:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:19:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 13:19:14 +0000 UTC Reason: Message:}])
Apr 10 13:19:19.235: INFO: Trying to dial the pod
Apr 10 13:19:24.247: INFO: Controller my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c: Got expected result from replica 1 [my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c-6fvzw]: "my-hostname-basic-ac5dfaa5-776c-4c7b-9c35-dada0e8cf73c-6fvzw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:19:24.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8247" for this suite.
Apr 10 13:19:30.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:19:30.360: INFO: namespace replicaset-8247 deletion completed in 6.109000915s
• [SLOW TEST:16.201 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:19:30.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:19:30.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9385" for this suite.
Apr 10 13:19:36.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:19:36.601: INFO: namespace services-9385 deletion completed in 6.182297176s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.241 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:19:36.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2f7a803b-d90b-4f30-a35a-808d3c892d14
STEP: Creating a pod to test consume configMaps
Apr 10 13:19:36.712: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3" in namespace "projected-8422" to be "success or failure"
Apr 10 13:19:36.722: INFO: Pod "pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.906786ms
Apr 10 13:19:38.734: INFO: Pod "pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022185437s
Apr 10 13:19:40.739: INFO: Pod "pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026435073s
STEP: Saw pod success
Apr 10 13:19:40.739: INFO: Pod "pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3" satisfied condition "success or failure"
Apr 10 13:19:40.742: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3 container projected-configmap-volume-test:
STEP: delete the pod
Apr 10 13:19:40.759: INFO: Waiting for pod pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3 to disappear
Apr 10 13:19:40.764: INFO: Pod pod-projected-configmaps-acb5e2ee-3bfd-43b8-b8cd-16cf59301be3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:19:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8422" for this suite.
Apr 10 13:19:46.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:19:46.880: INFO: namespace projected-8422 deletion completed in 6.11254314s
• [SLOW TEST:10.278 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:19:46.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 13:19:46.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c" in namespace "downward-api-3386" to be "success or failure"
Apr 10 13:19:46.964: INFO: Pod "downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.087983ms
Apr 10 13:19:48.969: INFO: Pod "downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008270554s
Apr 10 13:19:50.974: INFO: Pod "downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012913903s
STEP: Saw pod success
Apr 10 13:19:50.974: INFO: Pod "downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c" satisfied condition "success or failure"
Apr 10 13:19:50.977: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c container client-container:
STEP: delete the pod
Apr 10 13:19:51.011: INFO: Waiting for pod downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c to disappear
Apr 10 13:19:51.018: INFO: Pod downwardapi-volume-48a270d2-02f0-492c-b0f7-f53e17d67e1c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:19:51.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3386" for this suite.
Apr 10 13:19:57.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:19:57.132: INFO: namespace downward-api-3386 deletion completed in 6.109776332s
• [SLOW TEST:10.252 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:19:57.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2185
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 10 13:19:57.180: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 10 13:20:23.309: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=http&host=10.244.2.103&port=8080&tries=1'] Namespace:pod-network-test-2185 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:20:23.309: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:20:23.342787 6 log.go:172] (0xc003152580) (0xc0031ecdc0) Create stream
I0410 13:20:23.342829 6 log.go:172] (0xc003152580) (0xc0031ecdc0) Stream added, broadcasting: 1
I0410 13:20:23.346173 6 log.go:172] (0xc003152580) Reply frame received for 1
I0410 13:20:23.346220 6 log.go:172] (0xc003152580) (0xc00020ed20) Create stream
I0410 13:20:23.346237 6 log.go:172] (0xc003152580) (0xc00020ed20) Stream added, broadcasting: 3
I0410 13:20:23.347413 6 log.go:172] (0xc003152580) Reply frame received for 3
I0410 13:20:23.347464 6 log.go:172] (0xc003152580) (0xc0031ece60) Create stream
I0410 13:20:23.347477 6 log.go:172] (0xc003152580) (0xc0031ece60) Stream added, broadcasting: 5
I0410 13:20:23.348651 6 log.go:172] (0xc003152580) Reply frame received for 5
I0410 13:20:23.432548 6 log.go:172] (0xc003152580) Data frame received for 3
I0410 13:20:23.432589 6 log.go:172] (0xc00020ed20) (3) Data frame handling
I0410 13:20:23.432636 6 log.go:172] (0xc00020ed20) (3) Data frame sent
I0410 13:20:23.433812 6 log.go:172] (0xc003152580) Data frame received for 3
I0410 13:20:23.433908 6 log.go:172] (0xc00020ed20) (3) Data frame handling
I0410 13:20:23.434091 6 log.go:172] (0xc003152580) Data frame received for 5
I0410 13:20:23.434126 6 log.go:172] (0xc0031ece60) (5) Data frame handling
I0410 13:20:23.436179 6 log.go:172] (0xc003152580) Data frame received for 1
I0410 13:20:23.436211 6 log.go:172] (0xc0031ecdc0) (1) Data frame handling
I0410 13:20:23.436228 6 log.go:172] (0xc0031ecdc0) (1) Data frame sent
I0410 13:20:23.436248 6 log.go:172] (0xc003152580) (0xc0031ecdc0) Stream removed, broadcasting: 1
I0410 13:20:23.436293 6 log.go:172] (0xc003152580) Go away received
I0410 13:20:23.436459 6 log.go:172] (0xc003152580) (0xc0031ecdc0) Stream removed, broadcasting: 1
I0410 13:20:23.436492 6 log.go:172] (0xc003152580) (0xc00020ed20) Stream removed, broadcasting: 3
I0410 13:20:23.436517 6 log.go:172] (0xc003152580) (0xc0031ece60) Stream removed, broadcasting: 5
Apr 10 13:20:23.436: INFO: Waiting for endpoints: map[]
Apr 10 13:20:23.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=http&host=10.244.1.148&port=8080&tries=1'] Namespace:pod-network-test-2185 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:20:23.440: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:20:23.465847 6 log.go:172] (0xc001e9d3f0) (0xc0025f86e0) Create stream
I0410 13:20:23.465874 6 log.go:172] (0xc001e9d3f0) (0xc0025f86e0) Stream added, broadcasting: 1
I0410 13:20:23.467988 6 log.go:172] (0xc001e9d3f0) Reply frame received for 1
I0410 13:20:23.468028 6 log.go:172] (0xc001e9d3f0) (0xc0025f8780) Create stream
I0410 13:20:23.468047 6 log.go:172] (0xc001e9d3f0) (0xc0025f8780) Stream added, broadcasting: 3
I0410 13:20:23.469020 6 log.go:172] (0xc001e9d3f0) Reply frame received for 3
I0410 13:20:23.469056 6 log.go:172] (0xc001e9d3f0) (0xc002f1a1e0) Create stream
I0410 13:20:23.469069 6 log.go:172] (0xc001e9d3f0) (0xc002f1a1e0) Stream added, broadcasting: 5
I0410 13:20:23.470152 6 log.go:172] (0xc001e9d3f0) Reply frame received for 5
I0410 13:20:23.540800 6 log.go:172] (0xc001e9d3f0) Data frame received for 3
I0410 13:20:23.540836 6 log.go:172] (0xc0025f8780) (3) Data frame handling
I0410 13:20:23.540864 6 log.go:172] (0xc0025f8780) (3) Data frame sent
I0410 13:20:23.542074 6 log.go:172] (0xc001e9d3f0) Data frame received for 3
I0410 13:20:23.542094 6 log.go:172] (0xc0025f8780) (3) Data frame handling
I0410 13:20:23.542148 6 log.go:172] (0xc001e9d3f0) Data frame received for 5
I0410 13:20:23.542161 6 log.go:172] (0xc002f1a1e0) (5) Data frame handling
I0410 13:20:23.543863 6 log.go:172] (0xc001e9d3f0) Data frame received for 1
I0410 13:20:23.543884 6 log.go:172] (0xc0025f86e0) (1) Data frame handling
I0410 13:20:23.543901 6 log.go:172] (0xc0025f86e0) (1) Data frame sent
I0410 13:20:23.543915 6 log.go:172] (0xc001e9d3f0) (0xc0025f86e0) Stream removed, broadcasting: 1
I0410 13:20:23.543932 6 log.go:172] (0xc001e9d3f0) Go away received
I0410 13:20:23.544064 6 log.go:172] (0xc001e9d3f0) (0xc0025f86e0) Stream removed, broadcasting: 1
I0410 13:20:23.544113 6 log.go:172] (0xc001e9d3f0) (0xc0025f8780) Stream removed, broadcasting: 3
I0410 13:20:23.544132 6 log.go:172] (0xc001e9d3f0) (0xc002f1a1e0) Stream removed, broadcasting: 5
Apr 10 13:20:23.544: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:20:23.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2185" for this suite.
Apr 10 13:20:47.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:20:47.699: INFO: namespace pod-network-test-2185 deletion completed in 24.151382523s
• [SLOW TEST:50.566 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:20:47.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 10 13:20:53.880: INFO: DNS probes using dns-test-f5b939b6-0dfc-474a-8543-92365afa9e72 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 10 13:20:59.968: INFO: File wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:20:59.971: INFO: File jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:20:59.971: INFO: Lookups using dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe failed for: [wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local]
Apr 10 13:21:04.989: INFO: File wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:04.992: INFO: File jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:04.992: INFO: Lookups using dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe failed for: [wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local]
Apr 10 13:21:09.977: INFO: File wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:09.981: INFO: File jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:09.981: INFO: Lookups using dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe failed for: [wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local]
Apr 10 13:21:14.982: INFO: File wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
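The wheezy/jessie probe commands the test injects (shown in the STEP lines above) are just a write-latest-answer retry loop. A local sketch of that loop's shape, with `dig +short dns-test-service-3.dns-8332.svc.cluster.local CNAME` replaced by a stub so it runs without cluster DNS — `lookup` is an invented stand-in, not part of the test:

```shell
# Shape of the probe loop run inside the prober pods: each iteration overwrites
# the results file, so the file always holds the most recent answer.
lookup() { echo "bar.example.com."; }   # stand-in for the real dig query

results=$(mktemp)
for i in $(seq 1 3); do    # the real test loops 30 times with `sleep 1`
  lookup > "$results"
done
cat "$results"             # → bar.example.com.
```

Because only the latest answer survives, the "contains 'foo.example.com. ' instead of 'bar.example.com.'" failures above simply mean the ExternalName change had not yet propagated to the prober's resolver; the framework re-reads the file until it does.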
Apr 10 13:21:14.985: INFO: File jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:14.985: INFO: Lookups using dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe failed for: [wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local]
Apr 10 13:21:19.977: INFO: File wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:19.980: INFO: File jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local from pod dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 10 13:21:19.980: INFO: Lookups using dns-8332/dns-test-a6854837-4b77-4685-ae15-5576121e2efe failed for: [wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local]
Apr 10 13:21:24.977: INFO: DNS probes using dns-test-a6854837-4b77-4685-ae15-5576121e2efe succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8332.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8332.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 10 13:21:31.669: INFO: DNS probes using dns-test-37b1dc61-4bc5-4751-851d-ff916f7cc62d succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:21:31.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8332" for this suite.
Apr 10 13:21:37.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:21:37.820: INFO: namespace dns-8332 deletion completed in 6.070984145s
• [SLOW TEST:50.121 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:21:37.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e1828adf-1874-4d84-bad9-d1510a38910a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e1828adf-1874-4d84-bad9-d1510a38910a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:23:12.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9675" for this suite.
Apr 10 13:23:34.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:23:34.514: INFO: namespace configmap-9675 deletion completed in 22.101958493s
• [SLOW TEST:116.693 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:23:34.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Apr 10 13:23:34.612: INFO: Waiting up to 5m0s for pod "var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f" in namespace "var-expansion-5087" to be "success or failure"
Apr 10 13:23:34.632: INFO: Pod "var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.308178ms
Apr 10 13:23:36.649: INFO: Pod "var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.036994698s Apr 10 13:23:38.654: INFO: Pod "var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041519957s STEP: Saw pod success Apr 10 13:23:38.654: INFO: Pod "var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f" satisfied condition "success or failure" Apr 10 13:23:38.657: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f container dapi-container: STEP: delete the pod Apr 10 13:23:38.686: INFO: Waiting for pod var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f to disappear Apr 10 13:23:38.699: INFO: Pod var-expansion-e2794cb4-6a53-4627-a76a-b573e8b1183f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:23:38.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5087" for this suite. Apr 10 13:23:44.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:23:44.848: INFO: namespace var-expansion-5087 deletion completed in 6.145987727s • [SLOW TEST:10.334 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 
13:23:44.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-5adcbc8d-e7c2-4dd5-ae01-5a26da5caf5f STEP: Creating a pod to test consume secrets Apr 10 13:23:44.915: INFO: Waiting up to 5m0s for pod "pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362" in namespace "secrets-8511" to be "success or failure" Apr 10 13:23:44.928: INFO: Pod "pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362": Phase="Pending", Reason="", readiness=false. Elapsed: 12.296194ms Apr 10 13:23:46.931: INFO: Pod "pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015870629s Apr 10 13:23:48.935: INFO: Pod "pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019984037s STEP: Saw pod success Apr 10 13:23:48.935: INFO: Pod "pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362" satisfied condition "success or failure" Apr 10 13:23:48.938: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362 container secret-volume-test: STEP: delete the pod Apr 10 13:23:48.969: INFO: Waiting for pod pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362 to disappear Apr 10 13:23:48.980: INFO: Pod pod-secrets-5c295821-2a38-4934-8be5-92ea127d2362 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:23:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8511" for this suite. 
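Annotator's note: the "volume with mappings" pattern exercised by the Secrets test above maps individual Secret keys to custom file paths via the `items` list of the secret volume source. A minimal sketch of such a pod follows; the names (`secret-test-map`, `data-1`, `new-path-data-1`) are illustrative stand-ins, not the generated names from this run.

```yaml
# Hypothetical equivalent of the test's secret-volume pod: the `items`
# list remaps the Secret key `data-1` to the relative path
# `new-path-data-1`, so the container reads
# /etc/secret-volume/new-path-data-1 instead of /etc/secret-volume/data-1.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
```

Without `items`, every key in the Secret would appear as a file named after the key; with it, only the listed keys are projected, at the paths given.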
Apr 10 13:23:55.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:23:55.147: INFO: namespace secrets-8511 deletion completed in 6.144384927s • [SLOW TEST:10.299 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:23:55.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d77b580f-5d88-4000-82ef-ce49553abef2 STEP: Creating a pod to test consume configMaps Apr 10 13:23:55.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee" in namespace "configmap-5553" to be "success or failure" Apr 10 13:23:55.254: INFO: Pod "pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee": Phase="Pending", Reason="", readiness=false. Elapsed: 31.073786ms Apr 10 13:23:57.259: INFO: Pod "pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03538804s Apr 10 13:23:59.262: INFO: Pod "pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039379051s STEP: Saw pod success Apr 10 13:23:59.263: INFO: Pod "pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee" satisfied condition "success or failure" Apr 10 13:23:59.265: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee container configmap-volume-test: STEP: delete the pod Apr 10 13:23:59.298: INFO: Waiting for pod pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee to disappear Apr 10 13:23:59.303: INFO: Pod pod-configmaps-4afa65cc-df8f-4767-b6ce-e03a005e64ee no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:23:59.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5553" for this suite. Apr 10 13:24:05.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:24:05.396: INFO: namespace configmap-5553 deletion completed in 6.089507939s • [SLOW TEST:10.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:24:05.397: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 10 13:24:09.996: INFO: Successfully updated pod "pod-update-fa1e1be7-916a-42ce-996a-7f720c0c03f0" STEP: verifying the updated pod is in kubernetes Apr 10 13:24:10.001: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:24:10.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8537" for this suite. Apr 10 13:24:32.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:24:32.123: INFO: namespace pods-8537 deletion completed in 22.118096419s • [SLOW TEST:26.726 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:24:32.123: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-26b3b5ed-9d32-4988-9f44-9fe556898d11 STEP: Creating a pod to test consume configMaps Apr 10 13:24:32.185: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8" in namespace "projected-2417" to be "success or failure" Apr 10 13:24:32.237: INFO: Pod "pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 51.755884ms Apr 10 13:24:34.241: INFO: Pod "pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056366248s Apr 10 13:24:36.245: INFO: Pod "pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06035368s STEP: Saw pod success Apr 10 13:24:36.245: INFO: Pod "pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8" satisfied condition "success or failure" Apr 10 13:24:36.248: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8 container projected-configmap-volume-test: STEP: delete the pod Apr 10 13:24:36.283: INFO: Waiting for pod pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8 to disappear Apr 10 13:24:36.308: INFO: Pod pod-projected-configmaps-d4467e69-be7a-49d7-9f3d-dfdc9d95dfd8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:24:36.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2417" for this suite. 
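Annotator's note: the "Projected configMap" variant tested above differs from a plain configMap volume only in the volume source: the data arrives through a `projected` volume with a `configMap` entry under `sources`. A hedged sketch, with made-up resource names:

```yaml
# Sketch only: a projected volume delivering one ConfigMap. The same
# `sources` list could additionally carry secret, downwardAPI, or
# serviceAccountToken entries merged into a single mount point.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```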
Apr 10 13:24:42.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:24:42.401: INFO: namespace projected-2417 deletion completed in 6.089298196s
• [SLOW TEST:10.278 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:24:42.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 13:24:42.442: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 10 13:24:44.554: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:24:45.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-203" for this suite. Apr 10 13:24:51.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:24:51.892: INFO: namespace replication-controller-203 deletion completed in 6.232927733s • [SLOW TEST:9.490 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:24:51.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 10 13:24:51.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7153' Apr 10 13:24:54.573: INFO: stderr: "" Apr 10 13:24:54.574: INFO: stdout: "pod/pause created\n" Apr 10 13:24:54.574: INFO: Waiting 
up to 5m0s for 1 pods to be running and ready: [pause] Apr 10 13:24:54.574: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7153" to be "running and ready" Apr 10 13:24:54.586: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.75028ms Apr 10 13:24:56.598: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024604894s Apr 10 13:24:58.603: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.028913419s Apr 10 13:24:58.603: INFO: Pod "pause" satisfied condition "running and ready" Apr 10 13:24:58.603: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 10 13:24:58.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7153' Apr 10 13:24:58.707: INFO: stderr: "" Apr 10 13:24:58.707: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 10 13:24:58.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7153' Apr 10 13:24:58.801: INFO: stderr: "" Apr 10 13:24:58.801: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 10 13:24:58.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7153' Apr 10 13:24:58.915: INFO: stderr: "" Apr 10 13:24:58.915: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 10 13:24:58.915: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7153' Apr 10 13:24:59.016: INFO: stderr: "" Apr 10 13:24:59.016: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 10 13:24:59.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7153' Apr 10 13:24:59.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:24:59.128: INFO: stdout: "pod \"pause\" force deleted\n" Apr 10 13:24:59.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7153' Apr 10 13:24:59.311: INFO: stderr: "No resources found.\n" Apr 10 13:24:59.311: INFO: stdout: "" Apr 10 13:24:59.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7153 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 13:24:59.406: INFO: stderr: "" Apr 10 13:24:59.406: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:24:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7153" for this suite. 
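Annotator's note: the label add/verify/remove cycle driven by the test above uses only stock `kubectl label` syntax. Run by hand against the same pod (the pod name `pause` and namespace `kubectl-7153` are taken from this log; any pod works), the steps are:

```shell
# Add a label to the running pod.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-7153

# Verify: -L appends a TESTING-LABEL column showing the value (or blank).
kubectl get pod pause -L testing-label --namespace=kubectl-7153

# Remove the label: a trailing '-' after the key deletes it.
kubectl label pods pause testing-label- --namespace=kubectl-7153
```

Note that both the add and the remove print `pod/pause labeled`, which is why the log above shows the same stdout for both operations.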
Apr 10 13:25:05.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:25:05.504: INFO: namespace kubectl-7153 deletion completed in 6.093745681s • [SLOW TEST:13.612 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:25:05.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 10 13:25:12.168: INFO: Successfully updated pod "labelsupdate2e9e15b8-0fe6-41f5-b31c-6f24220727ac" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:25:14.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1708" for this 
suite. Apr 10 13:25:36.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:25:36.345: INFO: namespace downward-api-1708 deletion completed in 22.091815395s • [SLOW TEST:30.841 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:25:36.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 10 13:25:36.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod 
--namespace=kubectl-157' Apr 10 13:25:36.488: INFO: stderr: "" Apr 10 13:25:36.488: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 10 13:25:41.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-157 -o json' Apr 10 13:25:41.631: INFO: stderr: "" Apr 10 13:25:41.631: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-10T13:25:36Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-157\",\n \"resourceVersion\": \"4664824\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-157/pods/e2e-test-nginx-pod\",\n \"uid\": \"dc2a0057-3b21-4748-ab18-6b88eb56764a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4kb7x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": 
[\n {\n \"name\": \"default-token-4kb7x\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4kb7x\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T13:25:36Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T13:25:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T13:25:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T13:25:36Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://71185b4135e2eaab700ac72ba9316937f4c90555da72db9cb1ed8eeeb06be56a\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-10T13:25:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.111\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-10T13:25:36Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 10 13:25:41.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-157'
Apr 10 13:25:41.879: INFO: stderr: ""
Apr 10 13:25:41.879: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Apr 10 13:25:41.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-157'
Apr 10 13:25:44.946: INFO: stderr: ""
Apr 10 13:25:44.946: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:25:44.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-157" for this suite.
Apr 10 13:25:50.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:25:51.079: INFO: namespace kubectl-157 deletion completed in 6.113221664s
• [SLOW TEST:14.733 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:25:51.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 10 13:25:57.206: INFO: DNS probes using dns-997/dns-test-23f33dde-6158-4752-b3a5-a46a41de77a1 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:25:57.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-997" for this suite.
Apr 10 13:26:03.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:26:03.409: INFO: namespace dns-997 deletion completed in 6.139512736s
• [SLOW TEST:12.330 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:26:03.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 10 13:26:03.499: INFO: Waiting up to 5m0s for pod "pod-8baf57e9-9af0-4025-bc2f-6cf56776d424" in namespace "emptydir-8975" to be "success or failure"
Apr 10 13:26:03.508: INFO: Pod "pod-8baf57e9-9af0-4025-bc2f-6cf56776d424": Phase="Pending", Reason="", readiness=false. Elapsed: 9.646373ms
Apr 10 13:26:05.512: INFO: Pod "pod-8baf57e9-9af0-4025-bc2f-6cf56776d424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013815494s
Apr 10 13:26:07.517: INFO: Pod "pod-8baf57e9-9af0-4025-bc2f-6cf56776d424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01819692s
STEP: Saw pod success
Apr 10 13:26:07.517: INFO: Pod "pod-8baf57e9-9af0-4025-bc2f-6cf56776d424" satisfied condition "success or failure"
Apr 10 13:26:07.519: INFO: Trying to get logs from node iruya-worker pod pod-8baf57e9-9af0-4025-bc2f-6cf56776d424 container test-container:
STEP: delete the pod
Apr 10 13:26:07.534: INFO: Waiting for pod pod-8baf57e9-9af0-4025-bc2f-6cf56776d424 to disappear
Apr 10 13:26:07.539: INFO: Pod pod-8baf57e9-9af0-4025-bc2f-6cf56776d424 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:26:07.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8975" for this suite.
Apr 10 13:26:13.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:26:13.643: INFO: namespace emptydir-8975 deletion completed in 6.084704998s
• [SLOW TEST:10.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:26:13.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 13:26:13.838: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d" in namespace "downward-api-2815" to be "success or failure"
Apr 10 13:26:13.851: INFO: Pod "downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.993171ms
Apr 10 13:26:15.855: INFO: Pod "downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016098712s
Apr 10 13:26:17.858: INFO: Pod "downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019175732s
STEP: Saw pod success
Apr 10 13:26:17.858: INFO: Pod "downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d" satisfied condition "success or failure"
Apr 10 13:26:17.859: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d container client-container:
STEP: delete the pod
Apr 10 13:26:17.876: INFO: Waiting for pod downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d to disappear
Apr 10 13:26:17.881: INFO: Pod downwardapi-volume-2a761651-54fa-4a90-bf5f-f48d591b644d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:26:17.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2815" for this suite.
Apr 10 13:26:23.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:26:23.980: INFO: namespace downward-api-2815 deletion completed in 6.095570184s
• [SLOW TEST:10.336 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:26:23.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gf22t in namespace proxy-7440
I0410 13:26:24.115217 6 runners.go:180] Created replication controller with name: proxy-service-gf22t, namespace: proxy-7440, replica count: 1
I0410 13:26:25.165642 6 runners.go:180] proxy-service-gf22t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0410 13:26:26.165876 6 runners.go:180] proxy-service-gf22t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0410 13:26:27.166122 6 runners.go:180] proxy-service-gf22t Pods:
1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 13:26:28.166343 6 runners.go:180] proxy-service-gf22t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 13:26:28.169: INFO: setup took 4.118513864s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 10 13:26:28.176: INFO: (0) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 6.064787ms) Apr 10 13:26:28.176: INFO: (0) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 6.49013ms) Apr 10 13:26:28.176: INFO: (0) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 6.707765ms) Apr 10 13:26:28.177: INFO: (0) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 7.78581ms) Apr 10 13:26:28.178: INFO: (0) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 8.026579ms) Apr 10 13:26:28.178: INFO: (0) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 8.175046ms) Apr 10 13:26:28.179: INFO: (0) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 9.570672ms) Apr 10 13:26:28.180: INFO: (0) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 10.099306ms) Apr 10 13:26:28.180: INFO: (0) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 10.578307ms) Apr 10 13:26:28.181: INFO: (0) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 11.064683ms) Apr 10 13:26:28.182: INFO: (0) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 12.393177ms) Apr 10 13:26:28.183: INFO: (0) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 13.895327ms) Apr 10 13:26:28.183: INFO: (0) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 13.873551ms) Apr 10 13:26:28.188: INFO: (0) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 18.740532ms) Apr 10 13:26:28.188: INFO: (0) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 18.809012ms) Apr 10 13:26:28.189: INFO: (0) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test (200; 4.860885ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 4.910333ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 4.973605ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 5.156307ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 5.20195ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 5.128194ms) Apr 10 13:26:28.194: INFO: (1) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 5.114586ms) Apr 10 13:26:28.195: INFO: (1) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 5.254367ms) Apr 10 13:26:28.195: INFO: (1) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.233619ms) Apr 10 13:26:28.195: INFO: (1) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.287445ms) Apr 10 13:26:28.197: INFO: (2) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 2.645498ms) Apr 10 13:26:28.198: INFO: (2) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 3.24637ms) Apr 10 13:26:28.198: INFO: (2) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 3.435719ms) Apr 10 13:26:28.198: INFO: (2) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.419041ms) Apr 10 13:26:28.198: INFO: (2) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.681105ms) Apr 10 13:26:28.200: INFO: (2) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.912432ms) Apr 10 13:26:28.200: INFO: (2) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 4.918006ms) Apr 10 13:26:28.200: INFO: (2) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.113109ms) Apr 10 13:26:28.200: INFO: (2) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 5.13659ms) Apr 10 13:26:28.201: INFO: (2) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.924348ms) Apr 10 13:26:28.201: INFO: (2) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 6.124368ms) Apr 10 13:26:28.201: INFO: (2) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 6.125097ms) Apr 10 13:26:28.201: INFO: (2) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test (200; 3.756403ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 3.719168ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 3.865326ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.87752ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test<... (200; 3.92193ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.083706ms) Apr 10 13:26:28.205: INFO: (3) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 4.313181ms) Apr 10 13:26:28.206: INFO: (3) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 4.380267ms) Apr 10 13:26:28.206: INFO: (3) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.190721ms) Apr 10 13:26:28.207: INFO: (3) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.458749ms) Apr 10 13:26:28.207: INFO: (3) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 6.238741ms) Apr 10 13:26:28.208: INFO: (3) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 6.49331ms) Apr 10 13:26:28.208: INFO: (3) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 6.589168ms) Apr 10 13:26:28.212: INFO: (4) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 4.437343ms) Apr 10 13:26:28.212: INFO: (4) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... (200; 4.604765ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.572695ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... 
(200; 4.643286ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.584863ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.622135ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.766214ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 4.781716ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 4.80886ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 4.907324ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.824139ms) Apr 10 13:26:28.213: INFO: (4) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 4.974432ms) Apr 10 13:26:28.214: INFO: (4) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.620042ms) Apr 10 13:26:28.214: INFO: (4) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 5.60714ms) Apr 10 13:26:28.214: INFO: (4) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.977832ms) Apr 10 13:26:28.214: INFO: (4) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 8.738569ms) Apr 10 13:26:28.223: INFO: (5) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 8.843296ms) Apr 10 13:26:28.223: INFO: (5) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 8.841781ms) Apr 10 13:26:28.223: INFO: (5) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 8.88922ms) Apr 10 13:26:28.223: INFO: (5) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 9.088715ms) Apr 10 13:26:28.223: INFO: (5) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 9.080631ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 9.527113ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test<... (200; 9.664284ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 10.020059ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 10.167358ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 10.19176ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 10.338471ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 10.232514ms) Apr 10 13:26:28.224: INFO: (5) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 10.205711ms) Apr 10 13:26:28.227: INFO: (6) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 2.675585ms) Apr 10 13:26:28.229: INFO: (6) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.083174ms) Apr 10 13:26:28.229: INFO: (6) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.206053ms) Apr 10 13:26:28.229: INFO: (6) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.227398ms) Apr 10 13:26:28.230: INFO: (6) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 5.077269ms) Apr 10 13:26:28.230: INFO: (6) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.161703ms) Apr 10 13:26:28.230: INFO: (6) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 5.458779ms) Apr 10 13:26:28.230: INFO: (6) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test<... (200; 6.799572ms) Apr 10 13:26:28.238: INFO: (7) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 7.046011ms) Apr 10 13:26:28.238: INFO: (7) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 7.207576ms) Apr 10 13:26:28.238: INFO: (7) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 7.161607ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 7.572674ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 7.523932ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 7.613792ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 7.538287ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 7.577397ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 7.688227ms) Apr 10 13:26:28.239: INFO: (7) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 7.838409ms) Apr 10 13:26:28.242: INFO: (8) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 3.188545ms) Apr 10 13:26:28.243: INFO: (8) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.288824ms) Apr 10 13:26:28.243: INFO: (8) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 3.227843ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 4.34787ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 4.606863ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.744001ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 4.62654ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 4.662023ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 4.671154ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.860518ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.75384ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.037774ms) Apr 10 13:26:28.244: INFO: (8) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.246023ms) Apr 10 13:26:28.245: INFO: (8) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.225704ms) Apr 10 13:26:28.245: INFO: (8) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.230161ms) Apr 10 13:26:28.247: INFO: (9) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 2.147198ms) Apr 10 13:26:28.247: INFO: (9) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 2.336779ms) Apr 10 13:26:28.249: INFO: (9) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.751482ms) Apr 10 13:26:28.250: INFO: (9) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.981376ms) Apr 10 13:26:28.250: INFO: (9) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.221834ms) Apr 10 13:26:28.250: INFO: (9) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 5.246339ms) Apr 10 13:26:28.250: INFO: (9) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 5.790947ms) Apr 10 13:26:28.251: INFO: (9) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.749742ms) Apr 10 13:26:28.252: INFO: (9) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 7.300056ms) Apr 10 13:26:28.252: INFO: (9) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 7.261062ms) Apr 10 13:26:28.252: INFO: (9) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 7.371119ms) Apr 10 13:26:28.252: INFO: (9) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 7.391165ms) Apr 10 13:26:28.252: INFO: (9) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 7.249319ms) Apr 10 13:26:28.255: INFO: (10) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 3.085174ms) Apr 10 13:26:28.256: INFO: (10) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.302543ms) Apr 10 13:26:28.256: INFO: (10) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test (200; 3.464295ms) Apr 10 13:26:28.256: INFO: (10) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 3.510769ms) Apr 10 13:26:28.256: INFO: (10) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 3.800454ms) Apr 10 13:26:28.256: INFO: (10) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 4.112755ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.638673ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.708111ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.754057ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 5.751697ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.718267ms) Apr 10 13:26:28.258: INFO: (10) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.993162ms) Apr 10 13:26:28.262: INFO: (11) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 3.336825ms) Apr 10 13:26:28.262: INFO: (11) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... (200; 3.42648ms) Apr 10 13:26:28.262: INFO: (11) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.776967ms) Apr 10 13:26:28.262: INFO: (11) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test (200; 4.166751ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... 
(200; 4.247192ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.426569ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.518536ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 4.533443ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.749749ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 4.848091ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.028491ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.094955ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.113948ms) Apr 10 13:26:28.263: INFO: (11) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.143422ms) Apr 10 13:26:28.266: INFO: (12) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 2.069547ms) Apr 10 13:26:28.266: INFO: (12) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 2.344537ms) Apr 10 13:26:28.266: INFO: (12) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... 
(200; 2.396745ms) Apr 10 13:26:28.268: INFO: (12) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.313421ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.218494ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 5.319077ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.335129ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.286191ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.325307ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 5.276437ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 5.334171ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.393618ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 5.39015ms) Apr 10 13:26:28.269: INFO: (12) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.439466ms) Apr 10 13:26:28.271: INFO: (13) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 1.906044ms) Apr 10 13:26:28.273: INFO: (13) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 3.948988ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.687185ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.768558ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.765643ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test<... (200; 4.737569ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 4.791362ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.772473ms) Apr 10 13:26:28.274: INFO: (13) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 4.732803ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.556428ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.824955ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 5.746843ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.837524ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.772396ms) Apr 10 13:26:28.275: INFO: (13) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.944215ms) Apr 10 13:26:28.279: INFO: (14) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.154012ms) Apr 10 13:26:28.279: INFO: (14) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.140753ms) Apr 10 13:26:28.279: INFO: (14) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.20466ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.281969ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 4.335149ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 4.371567ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.497725ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 4.416741ms) Apr 10 13:26:28.280: INFO: (14) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 5.130321ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 5.33629ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 5.567278ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.864417ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 5.881889ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.922027ms) Apr 10 13:26:28.281: INFO: (14) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 6.13751ms) Apr 10 13:26:28.285: INFO: (15) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 3.457338ms) Apr 10 13:26:28.285: INFO: (15) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 3.510004ms) Apr 10 13:26:28.285: INFO: (15) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... 
(200; 4.278791ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.304091ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 4.382635ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 4.219013ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.450328ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 4.789406ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 4.951586ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 4.960561ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 5.051953ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 5.01321ms) Apr 10 13:26:28.286: INFO: (15) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 5.019123ms) Apr 10 13:26:28.328: INFO: (16) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 41.734362ms) Apr 10 13:26:28.328: INFO: (16) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 41.7462ms) Apr 10 13:26:28.329: INFO: (16) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 42.627777ms) Apr 10 13:26:28.329: INFO: (16) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 42.556359ms) Apr 10 13:26:28.329: INFO: (16) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: 
test<... (200; 42.703311ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 43.583986ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 43.624115ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... (200; 43.689257ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 43.74098ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 43.836458ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 43.844893ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 43.89575ms) Apr 10 13:26:28.330: INFO: (16) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 43.931296ms) Apr 10 13:26:28.331: INFO: (16) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 44.15348ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 8.295985ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 8.261655ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... 
(200; 8.413326ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 8.320831ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 8.46087ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 8.310341ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test<... (200; 8.436588ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 8.401472ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname1/proxy/: tls baz (200; 8.335213ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 8.446035ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 8.464722ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 8.391549ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 8.409665ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 8.396562ms) Apr 10 13:26:28.339: INFO: (17) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 8.456924ms) Apr 10 13:26:28.343: INFO: (18) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 3.689429ms) Apr 10 13:26:28.343: INFO: (18) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname1/proxy/: foo (200; 4.192947ms) Apr 10 13:26:28.343: INFO: (18) 
/api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... (200; 4.189193ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 4.438835ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.633053ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 4.604502ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t/proxy/: test (200; 4.709326ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:160/proxy/: foo (200; 4.634881ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:460/proxy/: tls baz (200; 4.713592ms) Apr 10 13:26:28.344: INFO: (18) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:1080/proxy/: ... (200; 4.676382ms) Apr 10 13:26:28.345: INFO: (18) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname1/proxy/: foo (200; 5.150476ms) Apr 10 13:26:28.345: INFO: (18) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 5.253266ms) Apr 10 13:26:28.345: INFO: (18) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: ... (200; 2.896877ms) Apr 10 13:26:28.349: INFO: (19) /api/v1/namespaces/proxy-7440/pods/http:proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 4.143731ms) Apr 10 13:26:28.349: INFO: (19) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:1080/proxy/: test<... 
(200; 4.300289ms) Apr 10 13:26:28.349: INFO: (19) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/: tls qux (200; 4.41328ms) Apr 10 13:26:28.350: INFO: (19) /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:443/proxy/: test (200; 6.166648ms) Apr 10 13:26:28.351: INFO: (19) /api/v1/namespaces/proxy-7440/services/https:proxy-service-gf22t:tlsportname2/proxy/: tls qux (200; 6.221853ms) Apr 10 13:26:28.351: INFO: (19) /api/v1/namespaces/proxy-7440/services/proxy-service-gf22t:portname2/proxy/: bar (200; 6.308032ms) Apr 10 13:26:28.351: INFO: (19) /api/v1/namespaces/proxy-7440/services/http:proxy-service-gf22t:portname2/proxy/: bar (200; 6.352166ms) Apr 10 13:26:28.352: INFO: (19) /api/v1/namespaces/proxy-7440/pods/proxy-service-gf22t-8zc7t:162/proxy/: bar (200; 6.720979ms) STEP: deleting ReplicationController proxy-service-gf22t in namespace proxy-7440, will wait for the garbage collector to delete the pods Apr 10 13:26:28.410: INFO: Deleting ReplicationController proxy-service-gf22t took: 6.431422ms Apr 10 13:26:28.710: INFO: Terminating ReplicationController proxy-service-gf22t pods took: 300.222782ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:26:41.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7440" for this suite. 
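The proxy checks above all hit the same apiserver proxy-subresource URL shape, varying only the resource kind, scheme, and port. As a minimal sketch (this helper is illustrative, not part of the e2e framework), such paths can be assembled like this:

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy subresource path like the ones in the log.

    kind is "pods" or "services"; scheme ("http"/"https") and port are
    optional, matching forms such as
    /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"


print(proxy_path("proxy-7440", "pods", "proxy-service-gf22t-8zc7t",
                 port=462, scheme="https"))
# → /api/v1/namespaces/proxy-7440/pods/https:proxy-service-gf22t-8zc7t:462/proxy/
```

The test exercises every combination of pod vs. service target, named vs. numeric port, and plain vs. scheme-prefixed form, which is why each round (0)–(19) in the log repeats the same set of paths.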
Apr 10 13:26:47.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:26:48.004: INFO: namespace proxy-7440 deletion completed in 6.088933579s • [SLOW TEST:24.024 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:26:48.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-tjb7 STEP: Creating a pod to test atomic-volume-subpath Apr 10 13:26:48.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tjb7" in namespace "subpath-6982" to be "success or failure" Apr 10 13:26:48.084: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.394768ms Apr 10 13:26:50.088: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886551s Apr 10 13:26:52.092: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011904174s Apr 10 13:26:54.096: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 6.016238753s Apr 10 13:26:56.101: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 8.020840902s Apr 10 13:26:58.105: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 10.024913693s Apr 10 13:27:00.109: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 12.029669067s Apr 10 13:27:02.114: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 14.034040893s Apr 10 13:27:04.118: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 16.038724155s Apr 10 13:27:06.123: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 18.043191772s Apr 10 13:27:08.127: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 20.047476424s Apr 10 13:27:10.131: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Running", Reason="", readiness=true. Elapsed: 22.051833467s Apr 10 13:27:12.136: INFO: Pod "pod-subpath-test-configmap-tjb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.05623019s STEP: Saw pod success Apr 10 13:27:12.136: INFO: Pod "pod-subpath-test-configmap-tjb7" satisfied condition "success or failure" Apr 10 13:27:12.139: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-tjb7 container test-container-subpath-configmap-tjb7: STEP: delete the pod Apr 10 13:27:12.158: INFO: Waiting for pod pod-subpath-test-configmap-tjb7 to disappear Apr 10 13:27:12.163: INFO: Pod pod-subpath-test-configmap-tjb7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-tjb7 Apr 10 13:27:12.163: INFO: Deleting pod "pod-subpath-test-configmap-tjb7" in namespace "subpath-6982" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:27:12.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6982" for this suite. Apr 10 13:27:18.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:27:18.266: INFO: namespace subpath-6982 deletion completed in 6.097860227s • [SLOW TEST:30.262 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:27:18.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 10 13:27:18.319: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:27:24.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4402" for this suite. 
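The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` records above come from a simple poll loop over the pod phase, logging elapsed time on each iteration. A self-contained sketch of that pattern, with `get_phase` standing in for the real API call (illustrative only, not the framework's actual implementation):

```python
import time


def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or the timeout
    expires. Mirrors the Pending -> Running -> Succeeded progression seen
    in the records above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(interval)
    raise TimeoutError(f"pod never reached any of {want}")


# Simulated phases: Pending twice, then Running, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None))
# → Succeeded
```

In the real suite the 2-second interval accounts for the ~2s spacing between consecutive `Elapsed:` records for each pod.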
Apr 10 13:27:30.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:27:30.227: INFO: namespace init-container-4402 deletion completed in 6.091978308s • [SLOW TEST:11.962 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:27:30.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5609 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 13:27:30.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 13:27:56.426: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.114:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5609 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 10 13:27:56.426: INFO: >>> kubeConfig: /root/.kube/config I0410 13:27:56.459323 6 log.go:172] (0xc0012dab00) (0xc00196bb80) Create stream I0410 13:27:56.459355 6 log.go:172] (0xc0012dab00) (0xc00196bb80) Stream added, broadcasting: 1 I0410 13:27:56.461842 6 log.go:172] (0xc0012dab00) Reply frame received for 1 I0410 13:27:56.461881 6 log.go:172] (0xc0012dab00) (0xc00196bc20) Create stream I0410 13:27:56.461896 6 log.go:172] (0xc0012dab00) (0xc00196bc20) Stream added, broadcasting: 3 I0410 13:27:56.462728 6 log.go:172] (0xc0012dab00) Reply frame received for 3 I0410 13:27:56.462766 6 log.go:172] (0xc0012dab00) (0xc002c86f00) Create stream I0410 13:27:56.462780 6 log.go:172] (0xc0012dab00) (0xc002c86f00) Stream added, broadcasting: 5 I0410 13:27:56.463881 6 log.go:172] (0xc0012dab00) Reply frame received for 5 I0410 13:27:56.553408 6 log.go:172] (0xc0012dab00) Data frame received for 5 I0410 13:27:56.553451 6 log.go:172] (0xc002c86f00) (5) Data frame handling I0410 13:27:56.553494 6 log.go:172] (0xc0012dab00) Data frame received for 3 I0410 13:27:56.553520 6 log.go:172] (0xc00196bc20) (3) Data frame handling I0410 13:27:56.553532 6 log.go:172] (0xc00196bc20) (3) Data frame sent I0410 13:27:56.553858 6 log.go:172] (0xc0012dab00) Data frame received for 3 I0410 13:27:56.553881 6 log.go:172] (0xc00196bc20) (3) Data frame handling I0410 13:27:56.555315 6 log.go:172] (0xc0012dab00) Data frame received for 1 I0410 13:27:56.555330 6 log.go:172] (0xc00196bb80) (1) Data frame handling I0410 13:27:56.555338 6 log.go:172] (0xc00196bb80) (1) Data frame sent I0410 13:27:56.555508 6 log.go:172] (0xc0012dab00) (0xc00196bb80) Stream removed, broadcasting: 1 I0410 13:27:56.555562 6 log.go:172] (0xc0012dab00) Go away received I0410 13:27:56.555685 6 log.go:172] (0xc0012dab00) (0xc00196bb80) Stream removed, broadcasting: 1 I0410 13:27:56.555706 6 log.go:172] (0xc0012dab00) (0xc00196bc20) Stream removed, broadcasting: 3 I0410 
13:27:56.555715 6 log.go:172] (0xc0012dab00) (0xc002c86f00) Stream removed, broadcasting: 5 Apr 10 13:27:56.555: INFO: Found all expected endpoints: [netserver-0] Apr 10 13:27:56.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.160:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5609 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 13:27:56.559: INFO: >>> kubeConfig: /root/.kube/config I0410 13:27:56.597103 6 log.go:172] (0xc001d246e0) (0xc001e90960) Create stream I0410 13:27:56.597279 6 log.go:172] (0xc001d246e0) (0xc001e90960) Stream added, broadcasting: 1 I0410 13:27:56.599806 6 log.go:172] (0xc001d246e0) Reply frame received for 1 I0410 13:27:56.599855 6 log.go:172] (0xc001d246e0) (0xc0009d65a0) Create stream I0410 13:27:56.599876 6 log.go:172] (0xc001d246e0) (0xc0009d65a0) Stream added, broadcasting: 3 I0410 13:27:56.600840 6 log.go:172] (0xc001d246e0) Reply frame received for 3 I0410 13:27:56.600881 6 log.go:172] (0xc001d246e0) (0xc00196bd60) Create stream I0410 13:27:56.600906 6 log.go:172] (0xc001d246e0) (0xc00196bd60) Stream added, broadcasting: 5 I0410 13:27:56.602232 6 log.go:172] (0xc001d246e0) Reply frame received for 5 I0410 13:27:56.662434 6 log.go:172] (0xc001d246e0) Data frame received for 3 I0410 13:27:56.662464 6 log.go:172] (0xc0009d65a0) (3) Data frame handling I0410 13:27:56.662483 6 log.go:172] (0xc0009d65a0) (3) Data frame sent I0410 13:27:56.662494 6 log.go:172] (0xc001d246e0) Data frame received for 3 I0410 13:27:56.662507 6 log.go:172] (0xc0009d65a0) (3) Data frame handling I0410 13:27:56.662538 6 log.go:172] (0xc001d246e0) Data frame received for 5 I0410 13:27:56.662551 6 log.go:172] (0xc00196bd60) (5) Data frame handling I0410 13:27:56.664064 6 log.go:172] (0xc001d246e0) Data frame received for 1 I0410 13:27:56.664110 6 log.go:172] (0xc001e90960) (1) Data frame handling I0410 
13:27:56.664137 6 log.go:172] (0xc001e90960) (1) Data frame sent I0410 13:27:56.664159 6 log.go:172] (0xc001d246e0) (0xc001e90960) Stream removed, broadcasting: 1 I0410 13:27:56.664179 6 log.go:172] (0xc001d246e0) Go away received I0410 13:27:56.664391 6 log.go:172] (0xc001d246e0) (0xc001e90960) Stream removed, broadcasting: 1 I0410 13:27:56.664432 6 log.go:172] (0xc001d246e0) (0xc0009d65a0) Stream removed, broadcasting: 3 I0410 13:27:56.664463 6 log.go:172] (0xc001d246e0) (0xc00196bd60) Stream removed, broadcasting: 5 Apr 10 13:27:56.664: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:27:56.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5609" for this suite. Apr 10 13:28:20.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:28:20.777: INFO: namespace pod-network-test-5609 deletion completed in 24.109493663s • [SLOW TEST:50.550 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:28:20.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-f179ba52-a21e-4481-803f-ba8e7f1b7c50 STEP: Creating a pod to test consume secrets Apr 10 13:28:20.886: INFO: Waiting up to 5m0s for pod "pod-secrets-e221f058-7562-4000-822c-05c5582bcec5" in namespace "secrets-4152" to be "success or failure" Apr 10 13:28:20.889: INFO: Pod "pod-secrets-e221f058-7562-4000-822c-05c5582bcec5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367738ms Apr 10 13:28:22.892: INFO: Pod "pod-secrets-e221f058-7562-4000-822c-05c5582bcec5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006681732s Apr 10 13:28:24.896: INFO: Pod "pod-secrets-e221f058-7562-4000-822c-05c5582bcec5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010139754s STEP: Saw pod success Apr 10 13:28:24.896: INFO: Pod "pod-secrets-e221f058-7562-4000-822c-05c5582bcec5" satisfied condition "success or failure" Apr 10 13:28:24.898: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e221f058-7562-4000-822c-05c5582bcec5 container secret-volume-test: STEP: delete the pod Apr 10 13:28:24.920: INFO: Waiting for pod pod-secrets-e221f058-7562-4000-822c-05c5582bcec5 to disappear Apr 10 13:28:24.940: INFO: Pod pod-secrets-e221f058-7562-4000-822c-05c5582bcec5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:28:24.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4152" for this suite. Apr 10 13:28:30.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:28:31.056: INFO: namespace secrets-4152 deletion completed in 6.113039613s • [SLOW TEST:10.278 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:28:31.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 10 13:28:35.691: INFO: Successfully updated pod "labelsupdate429b59fe-659b-4565-b0eb-ee4f0bd8d507" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:28:37.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3164" for this suite. Apr 10 13:28:59.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:28:59.898: INFO: namespace projected-3164 deletion completed in 22.13938554s • [SLOW TEST:28.841 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:28:59.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e435cdf5-f717-4117-bd05-0a2f40608b2d STEP: Creating a pod to test consume configMaps Apr 10 13:28:59.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755" in namespace "projected-3363" to be "success or failure" Apr 10 13:28:59.967: INFO: Pod "pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755": Phase="Pending", Reason="", readiness=false. Elapsed: 3.351257ms Apr 10 13:29:01.972: INFO: Pod "pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008042213s Apr 10 13:29:03.976: INFO: Pod "pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012287875s STEP: Saw pod success Apr 10 13:29:03.976: INFO: Pod "pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755" satisfied condition "success or failure" Apr 10 13:29:03.979: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755 container projected-configmap-volume-test: STEP: delete the pod Apr 10 13:29:04.014: INFO: Waiting for pod pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755 to disappear Apr 10 13:29:04.027: INFO: Pod pod-projected-configmaps-c2cc73e4-fd34-4ba0-9049-115ccd30f755 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:29:04.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3363" for this suite. 
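The node-pod networking test earlier runs `curl ... | grep -v '^\s*$'` inside the hostexec container to fetch a peer's hostname while discarding blank output lines. A Python equivalent of that filter (a sketch of the same idea, not the e2e helper itself):

```python
import re


def strip_blank_lines(text):
    """Mimic the `grep -v '^\\s*$'` filter used by the ExecWithOptions curl
    pipeline above: drop lines that are empty or whitespace-only."""
    return "\n".join(
        line for line in text.splitlines()
        if not re.match(r"^\s*$", line)
    )


print(strip_blank_lines("netserver-0\n   \n"))
# → netserver-0
```

The test then compares the filtered hostname against the expected endpoint list, which is what the `Found all expected endpoints: [netserver-0]` records report.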
Apr 10 13:29:10.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:29:10.159: INFO: namespace projected-3363 deletion completed in 6.127780545s • [SLOW TEST:10.260 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:29:10.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-8194 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8194 STEP: Deleting pre-stop pod Apr 10 13:29:23.319: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:29:23.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8194" for this suite. Apr 10 13:30:01.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:30:01.439: INFO: namespace prestop-8194 deletion completed in 38.110486661s • [SLOW TEST:51.280 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:30:01.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-6ff56f68-db28-48de-8bc3-4618ed8b7cbd 
[AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:30:01.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5376" for this suite. Apr 10 13:30:07.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:30:07.598: INFO: namespace configmap-5376 deletion completed in 6.078590053s • [SLOW TEST:6.159 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:30:07.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 10 13:30:07.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-2506' Apr 10 13:30:07.881: INFO: stderr: "" Apr 10 13:30:07.881: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 13:30:07.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2506' Apr 10 13:30:07.981: INFO: stderr: "" Apr 10 13:30:07.981: INFO: stdout: "update-demo-nautilus-b45j2 update-demo-nautilus-f9fp8 " Apr 10 13:30:07.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b45j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2506' Apr 10 13:30:08.068: INFO: stderr: "" Apr 10 13:30:08.068: INFO: stdout: "" Apr 10 13:30:08.068: INFO: update-demo-nautilus-b45j2 is created but not running Apr 10 13:30:13.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2506' Apr 10 13:30:13.178: INFO: stderr: "" Apr 10 13:30:13.178: INFO: stdout: "update-demo-nautilus-b45j2 update-demo-nautilus-f9fp8 " Apr 10 13:30:13.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b45j2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2506' Apr 10 13:30:13.272: INFO: stderr: "" Apr 10 13:30:13.272: INFO: stdout: "true" Apr 10 13:30:13.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b45j2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2506' Apr 10 13:30:13.360: INFO: stderr: "" Apr 10 13:30:13.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 13:30:13.360: INFO: validating pod update-demo-nautilus-b45j2 Apr 10 13:30:13.380: INFO: got data: { "image": "nautilus.jpg" } Apr 10 13:30:13.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 13:30:13.380: INFO: update-demo-nautilus-b45j2 is verified up and running Apr 10 13:30:13.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9fp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2506' Apr 10 13:30:13.481: INFO: stderr: "" Apr 10 13:30:13.481: INFO: stdout: "true" Apr 10 13:30:13.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9fp8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2506' Apr 10 13:30:13.577: INFO: stderr: "" Apr 10 13:30:13.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 13:30:13.577: INFO: validating pod update-demo-nautilus-f9fp8 Apr 10 13:30:13.581: INFO: got data: { "image": "nautilus.jpg" } Apr 10 13:30:13.581: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 13:30:13.581: INFO: update-demo-nautilus-f9fp8 is verified up and running STEP: using delete to clean up resources Apr 10 13:30:13.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2506' Apr 10 13:30:13.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:30:13.707: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 10 13:30:13.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2506' Apr 10 13:30:13.826: INFO: stderr: "No resources found.\n" Apr 10 13:30:13.826: INFO: stdout: "" Apr 10 13:30:13.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2506 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 13:30:13.956: INFO: stderr: "" Apr 10 13:30:13.957: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:30:13.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2506" for this suite. 
Apr 10 13:30:35.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:30:36.054: INFO: namespace kubectl-2506 deletion completed in 22.093265872s • [SLOW TEST:28.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:30:36.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 10 13:30:36.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1547' Apr 10 13:30:36.412: INFO: stderr: "" Apr 10 13:30:36.412: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 13:30:36.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1547' Apr 10 13:30:36.553: INFO: stderr: "" Apr 10 13:30:36.553: INFO: stdout: "update-demo-nautilus-2vn7h update-demo-nautilus-nzd86 " Apr 10 13:30:36.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vn7h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:30:36.654: INFO: stderr: "" Apr 10 13:30:36.654: INFO: stdout: "" Apr 10 13:30:36.654: INFO: update-demo-nautilus-2vn7h is created but not running Apr 10 13:30:41.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1547' Apr 10 13:30:41.757: INFO: stderr: "" Apr 10 13:30:41.757: INFO: stdout: "update-demo-nautilus-2vn7h update-demo-nautilus-nzd86 " Apr 10 13:30:41.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vn7h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:30:41.847: INFO: stderr: "" Apr 10 13:30:41.847: INFO: stdout: "true" Apr 10 13:30:41.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vn7h -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:30:41.938: INFO: stderr: "" Apr 10 13:30:41.938: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 13:30:41.938: INFO: validating pod update-demo-nautilus-2vn7h Apr 10 13:30:41.943: INFO: got data: { "image": "nautilus.jpg" } Apr 10 13:30:41.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 13:30:41.943: INFO: update-demo-nautilus-2vn7h is verified up and running Apr 10 13:30:41.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzd86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:30:42.033: INFO: stderr: "" Apr 10 13:30:42.033: INFO: stdout: "true" Apr 10 13:30:42.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzd86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:30:42.125: INFO: stderr: "" Apr 10 13:30:42.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 13:30:42.125: INFO: validating pod update-demo-nautilus-nzd86 Apr 10 13:30:42.129: INFO: got data: { "image": "nautilus.jpg" } Apr 10 13:30:42.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 13:30:42.129: INFO: update-demo-nautilus-nzd86 is verified up and running STEP: rolling-update to new replication controller Apr 10 13:30:42.131: INFO: scanned /root for discovery docs: Apr 10 13:30:42.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1547' Apr 10 13:31:04.747: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 10 13:31:04.747: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 13:31:04.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1547' Apr 10 13:31:04.845: INFO: stderr: "" Apr 10 13:31:04.845: INFO: stdout: "update-demo-kitten-nmbj8 update-demo-kitten-wrcgq " Apr 10 13:31:04.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nmbj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:31:04.945: INFO: stderr: "" Apr 10 13:31:04.945: INFO: stdout: "true" Apr 10 13:31:04.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nmbj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:31:05.034: INFO: stderr: "" Apr 10 13:31:05.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 10 13:31:05.034: INFO: validating pod update-demo-kitten-nmbj8 Apr 10 13:31:05.038: INFO: got data: { "image": "kitten.jpg" } Apr 10 13:31:05.038: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 10 13:31:05.038: INFO: update-demo-kitten-nmbj8 is verified up and running Apr 10 13:31:05.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wrcgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:31:05.126: INFO: stderr: "" Apr 10 13:31:05.126: INFO: stdout: "true" Apr 10 13:31:05.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wrcgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1547' Apr 10 13:31:05.221: INFO: stderr: "" Apr 10 13:31:05.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 10 13:31:05.221: INFO: validating pod update-demo-kitten-wrcgq Apr 10 13:31:05.225: INFO: got data: { "image": "kitten.jpg" } Apr 10 13:31:05.225: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Apr 10 13:31:05.225: INFO: update-demo-kitten-wrcgq is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:31:05.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1547" for this suite. Apr 10 13:31:27.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:31:27.305: INFO: namespace kubectl-1547 deletion completed in 22.077492491s • [SLOW TEST:51.251 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:31:27.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-37c50fbc-e6b9-4649-9f3c-a1f8528c2851 STEP: Creating a pod to test consume secrets Apr 
10 13:31:27.386: INFO: Waiting up to 5m0s for pod "pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b" in namespace "secrets-2873" to be "success or failure" Apr 10 13:31:27.390: INFO: Pod "pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357122ms Apr 10 13:31:29.395: INFO: Pod "pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008694487s Apr 10 13:31:31.399: INFO: Pod "pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013077225s STEP: Saw pod success Apr 10 13:31:31.399: INFO: Pod "pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b" satisfied condition "success or failure" Apr 10 13:31:31.402: INFO: Trying to get logs from node iruya-worker pod pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b container secret-volume-test: STEP: delete the pod Apr 10 13:31:31.437: INFO: Waiting for pod pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b to disappear Apr 10 13:31:31.444: INFO: Pod pod-secrets-38487818-d62b-4bfc-8eed-28199d63d90b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:31:31.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2873" for this suite. 
Apr 10 13:31:37.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:31:37.539: INFO: namespace secrets-2873 deletion completed in 6.092970637s • [SLOW TEST:10.234 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:31:37.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 10 13:31:37.602: INFO: Waiting up to 5m0s for pod "downward-api-6165f610-a7af-46ef-9505-329bca6b938a" in namespace "downward-api-5278" to be "success or failure" Apr 10 13:31:37.606: INFO: Pod "downward-api-6165f610-a7af-46ef-9505-329bca6b938a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.690687ms Apr 10 13:31:39.611: INFO: Pod "downward-api-6165f610-a7af-46ef-9505-329bca6b938a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008355233s Apr 10 13:31:41.616: INFO: Pod "downward-api-6165f610-a7af-46ef-9505-329bca6b938a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01348404s STEP: Saw pod success Apr 10 13:31:41.616: INFO: Pod "downward-api-6165f610-a7af-46ef-9505-329bca6b938a" satisfied condition "success or failure" Apr 10 13:31:41.619: INFO: Trying to get logs from node iruya-worker2 pod downward-api-6165f610-a7af-46ef-9505-329bca6b938a container dapi-container: STEP: delete the pod Apr 10 13:31:41.645: INFO: Waiting for pod downward-api-6165f610-a7af-46ef-9505-329bca6b938a to disappear Apr 10 13:31:41.648: INFO: Pod downward-api-6165f610-a7af-46ef-9505-329bca6b938a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:31:41.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5278" for this suite. Apr 10 13:31:47.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:31:47.739: INFO: namespace downward-api-5278 deletion completed in 6.08684864s • [SLOW TEST:10.199 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating 
a kubernetes client Apr 10 13:31:47.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 10 13:31:55.899: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 13:31:55.920: INFO: Pod pod-with-poststart-http-hook still exists Apr 10 13:31:57.920: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 13:31:57.924: INFO: Pod pod-with-poststart-http-hook still exists Apr 10 13:31:59.920: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 13:31:59.924: INFO: Pod pod-with-poststart-http-hook still exists Apr 10 13:32:01.920: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 13:32:01.937: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:32:01.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1752" for this suite. 
Apr 10 13:32:23.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:32:24.058: INFO: namespace container-lifecycle-hook-1752 deletion completed in 22.116389126s • [SLOW TEST:36.319 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:32:24.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 10 13:32:24.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4710' Apr 10 13:32:24.381: INFO: stderr: "" Apr 10 13:32:24.381: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Apr 10 13:32:25.386: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:32:25.386: INFO: Found 0 / 1 Apr 10 13:32:26.393: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:32:26.393: INFO: Found 0 / 1 Apr 10 13:32:27.386: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:32:27.386: INFO: Found 0 / 1 Apr 10 13:32:28.386: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:32:28.386: INFO: Found 1 / 1 Apr 10 13:32:28.386: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 10 13:32:28.390: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:32:28.390: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 10 13:32:28.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710' Apr 10 13:32:28.492: INFO: stderr: "" Apr 10 13:32:28.493: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 10 Apr 13:32:26.742 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Apr 13:32:26.743 # Server started, Redis version 3.2.12\n1:M 10 Apr 13:32:26.743 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Apr 13:32:26.743 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 10 13:32:28.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710 --tail=1' Apr 10 13:32:28.595: INFO: stderr: "" Apr 10 13:32:28.596: INFO: stdout: "1:M 10 Apr 13:32:26.743 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 10 13:32:28.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710 --limit-bytes=1' Apr 10 13:32:28.716: INFO: stderr: "" Apr 10 13:32:28.716: INFO: stdout: " " STEP: exposing timestamps Apr 10 13:32:28.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710 --tail=1 --timestamps' Apr 10 13:32:28.820: INFO: stderr: "" Apr 10 13:32:28.820: INFO: stdout: "2020-04-10T13:32:26.743230501Z 1:M 10 Apr 13:32:26.743 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 10 13:32:31.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710 --since=1s' Apr 10 13:32:31.438: INFO: stderr: "" Apr 10 13:32:31.438: INFO: stdout: "" Apr 10 13:32:31.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-swttz redis-master --namespace=kubectl-4710 --since=24h' Apr 10 13:32:31.543: INFO: stderr: "" Apr 10 13:32:31.543: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 10 Apr 13:32:26.742 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Apr 13:32:26.743 # Server started, Redis version 3.2.12\n1:M 10 Apr 13:32:26.743 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Apr 13:32:26.743 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 10 13:32:31.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4710' Apr 10 13:32:31.685: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 13:32:31.685: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 10 13:32:31.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4710' Apr 10 13:32:31.801: INFO: stderr: "No resources found.\n" Apr 10 13:32:31.801: INFO: stdout: "" Apr 10 13:32:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4710 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 13:32:31.901: INFO: stderr: "" Apr 10 13:32:31.901: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:32:31.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4710" for this suite. 
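An aside on the log-filtering flags exercised in the test above: `--tail=N` returns the last N lines of the container log, and `--limit-bytes=N` truncates the output to the first N bytes (which is why `--limit-bytes=1` above returned the single leading space of the Redis banner). The pod log file below is a made-up stand-in, not a real kubectl invocation; this is only a local sketch of the two flags' semantics using coreutils:

```shell
# Stand-in for a container log stream (hypothetical content, not from the run above).
printf 'Server started\nReady to accept connections\n' > /tmp/pod.log

# Analogous to: kubectl logs POD CONTAINER --tail=1
tail -n 1 /tmp/pod.log    # prints the last line only

# Analogous to: kubectl logs POD CONTAINER --limit-bytes=1
head -c 1 /tmp/pod.log    # prints the first byte only
```

`--timestamps` and `--since` have no exact coreutils analogue: they prepend the RFC3339 timestamp the kubelet recorded for each line, and filter lines by that timestamp, respectively — which is why `--since=1s` above returned empty output for a pod whose last log line was several seconds old.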
Apr 10 13:32:37.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:32:37.996: INFO: namespace kubectl-4710 deletion completed in 6.092125292s • [SLOW TEST:13.938 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:32:37.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:32:38.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-1468" for this suite. Apr 10 13:32:44.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:32:44.232: INFO: namespace kubelet-test-1468 deletion completed in 6.104783529s • [SLOW TEST:6.235 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:32:44.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 13:32:44.311: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a" in namespace "downward-api-4330" to be 
"success or failure" Apr 10 13:32:44.314: INFO: Pod "downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.185866ms Apr 10 13:32:46.317: INFO: Pod "downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006675481s Apr 10 13:32:48.322: INFO: Pod "downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011047782s STEP: Saw pod success Apr 10 13:32:48.322: INFO: Pod "downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a" satisfied condition "success or failure" Apr 10 13:32:48.324: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a container client-container: STEP: delete the pod Apr 10 13:32:48.375: INFO: Waiting for pod downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a to disappear Apr 10 13:32:48.405: INFO: Pod downwardapi-volume-1c7c95e9-6312-4c1a-808a-98a58626ae8a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:32:48.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4330" for this suite. 
Apr 10 13:32:54.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:32:54.547: INFO: namespace downward-api-4330 deletion completed in 6.138742975s • [SLOW TEST:10.314 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:32:54.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 10 13:32:54.622: INFO: Waiting up to 5m0s for pod "pod-34b9b1a2-0317-4e81-946e-dd06622d02d8" in namespace "emptydir-600" to be "success or failure" Apr 10 13:32:54.625: INFO: Pod "pod-34b9b1a2-0317-4e81-946e-dd06622d02d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.69969ms Apr 10 13:32:56.711: INFO: Pod "pod-34b9b1a2-0317-4e81-946e-dd06622d02d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089150611s Apr 10 13:32:58.715: INFO: Pod "pod-34b9b1a2-0317-4e81-946e-dd06622d02d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093388649s STEP: Saw pod success Apr 10 13:32:58.715: INFO: Pod "pod-34b9b1a2-0317-4e81-946e-dd06622d02d8" satisfied condition "success or failure" Apr 10 13:32:58.719: INFO: Trying to get logs from node iruya-worker2 pod pod-34b9b1a2-0317-4e81-946e-dd06622d02d8 container test-container: STEP: delete the pod Apr 10 13:32:58.735: INFO: Waiting for pod pod-34b9b1a2-0317-4e81-946e-dd06622d02d8 to disappear Apr 10 13:32:58.753: INFO: Pod pod-34b9b1a2-0317-4e81-946e-dd06622d02d8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:32:58.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-600" for this suite. Apr 10 13:33:04.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:33:04.853: INFO: namespace emptydir-600 deletion completed in 6.095843731s • [SLOW TEST:10.306 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 
13:33:04.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:33:04.901: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:33:05.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8461" for this suite. Apr 10 13:33:12.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:33:12.127: INFO: namespace custom-resource-definition-8461 deletion completed in 6.128766912s • [SLOW TEST:7.274 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:33:12.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 10 13:33:16.732: INFO: Successfully updated pod "annotationupdate0c442a18-2790-46d0-b424-8da153943334" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:33:18.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6917" for this suite. Apr 10 13:33:40.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:33:40.866: INFO: namespace downward-api-6917 deletion completed in 22.08774471s • [SLOW TEST:28.739 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:33:40.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account 
to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-ab0b1eb4-6b1c-43ff-a1dd-e703b1096931 STEP: Creating configMap with name cm-test-opt-upd-4589a68d-cf56-43b9-95fc-24e3acbb904a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ab0b1eb4-6b1c-43ff-a1dd-e703b1096931 STEP: Updating configmap cm-test-opt-upd-4589a68d-cf56-43b9-95fc-24e3acbb904a STEP: Creating configMap with name cm-test-opt-create-96bb803d-fb2c-4560-9cc3-11a2a8ad4474 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:33:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2282" for this suite. Apr 10 13:34:11.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:34:11.205: INFO: namespace configmap-2282 deletion completed in 22.104925542s • [SLOW TEST:30.339 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 
13:34:11.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 10 13:34:11.236: INFO: namespace kubectl-9764 Apr 10 13:34:11.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9764' Apr 10 13:34:11.494: INFO: stderr: "" Apr 10 13:34:11.494: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 10 13:34:12.499: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:34:12.499: INFO: Found 0 / 1 Apr 10 13:34:13.498: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:34:13.498: INFO: Found 0 / 1 Apr 10 13:34:14.498: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:34:14.498: INFO: Found 0 / 1 Apr 10 13:34:15.498: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:34:15.498: INFO: Found 1 / 1 Apr 10 13:34:15.498: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 10 13:34:15.502: INFO: Selector matched 1 pods for map[app:redis] Apr 10 13:34:15.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 10 13:34:15.502: INFO: wait on redis-master startup in kubectl-9764 Apr 10 13:34:15.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zw4v redis-master --namespace=kubectl-9764' Apr 10 13:34:15.605: INFO: stderr: "" Apr 10 13:34:15.605: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 10 Apr 13:34:13.994 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Apr 13:34:13.994 # Server started, Redis version 3.2.12\n1:M 10 Apr 13:34:13.994 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Apr 13:34:13.994 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 10 13:34:15.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9764' Apr 10 13:34:15.729: INFO: stderr: "" Apr 10 13:34:15.729: INFO: stdout: "service/rm2 exposed\n" Apr 10 13:34:15.741: INFO: Service rm2 in namespace kubectl-9764 found. STEP: exposing service Apr 10 13:34:17.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9764' Apr 10 13:34:17.862: INFO: stderr: "" Apr 10 13:34:17.862: INFO: stdout: "service/rm3 exposed\n" Apr 10 13:34:17.867: INFO: Service rm3 in namespace kubectl-9764 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:34:19.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9764" for this suite. Apr 10 13:34:41.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:34:41.967: INFO: namespace kubectl-9764 deletion completed in 22.088168409s • [SLOW TEST:30.761 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:34:41.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 10 13:34:42.017: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 13:34:42.026: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 13:34:42.029: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 10 13:34:42.034: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.034: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:34:42.034: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.034: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 13:34:42.034: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 10 13:34:42.047: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.047: INFO: Container coredns ready: true, restart count 0 Apr 10 13:34:42.047: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.047: INFO: Container coredns ready: true, restart count 0 Apr 10 13:34:42.047: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.047: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:34:42.047: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 10 13:34:42.047: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160478b3bb479898], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:34:43.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8286" for this suite. Apr 10 13:34:49.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:34:49.197: INFO: namespace sched-pred-8286 deletion completed in 6.11097702s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.230 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:34:49.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 10 13:34:49.278: INFO: Waiting up to 5m0s for pod "pod-7c1baced-1ad4-451c-a330-c39616e0cc10" in 
namespace "emptydir-8355" to be "success or failure" Apr 10 13:34:49.282: INFO: Pod "pod-7c1baced-1ad4-451c-a330-c39616e0cc10": Phase="Pending", Reason="", readiness=false. Elapsed: 3.848163ms Apr 10 13:34:51.285: INFO: Pod "pod-7c1baced-1ad4-451c-a330-c39616e0cc10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007326647s Apr 10 13:34:53.293: INFO: Pod "pod-7c1baced-1ad4-451c-a330-c39616e0cc10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015313723s STEP: Saw pod success Apr 10 13:34:53.293: INFO: Pod "pod-7c1baced-1ad4-451c-a330-c39616e0cc10" satisfied condition "success or failure" Apr 10 13:34:53.296: INFO: Trying to get logs from node iruya-worker pod pod-7c1baced-1ad4-451c-a330-c39616e0cc10 container test-container: STEP: delete the pod Apr 10 13:34:53.313: INFO: Waiting for pod pod-7c1baced-1ad4-451c-a330-c39616e0cc10 to disappear Apr 10 13:34:53.317: INFO: Pod pod-7c1baced-1ad4-451c-a330-c39616e0cc10 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:34:53.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8355" for this suite. 
Apr 10 13:34:59.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:34:59.420: INFO: namespace emptydir-8355 deletion completed in 6.099977419s • [SLOW TEST:10.223 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:34:59.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1195 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 13:34:59.487: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 13:35:23.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.2.126&port=8081&tries=1'] Namespace:pod-network-test-1195 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Apr 10 13:35:23.599: INFO: >>> kubeConfig: /root/.kube/config I0410 13:35:23.636035 6 log.go:172] (0xc000de2420) (0xc0020d2d20) Create stream I0410 13:35:23.636065 6 log.go:172] (0xc000de2420) (0xc0020d2d20) Stream added, broadcasting: 1 I0410 13:35:23.638846 6 log.go:172] (0xc000de2420) Reply frame received for 1 I0410 13:35:23.638900 6 log.go:172] (0xc000de2420) (0xc002b9dcc0) Create stream I0410 13:35:23.638916 6 log.go:172] (0xc000de2420) (0xc002b9dcc0) Stream added, broadcasting: 3 I0410 13:35:23.640026 6 log.go:172] (0xc000de2420) Reply frame received for 3 I0410 13:35:23.640072 6 log.go:172] (0xc000de2420) (0xc0020d2dc0) Create stream I0410 13:35:23.640088 6 log.go:172] (0xc000de2420) (0xc0020d2dc0) Stream added, broadcasting: 5 I0410 13:35:23.641277 6 log.go:172] (0xc000de2420) Reply frame received for 5 I0410 13:35:23.744744 6 log.go:172] (0xc000de2420) Data frame received for 3 I0410 13:35:23.744790 6 log.go:172] (0xc002b9dcc0) (3) Data frame handling I0410 13:35:23.744821 6 log.go:172] (0xc002b9dcc0) (3) Data frame sent I0410 13:35:23.746015 6 log.go:172] (0xc000de2420) Data frame received for 5 I0410 13:35:23.746062 6 log.go:172] (0xc0020d2dc0) (5) Data frame handling I0410 13:35:23.746096 6 log.go:172] (0xc000de2420) Data frame received for 3 I0410 13:35:23.746113 6 log.go:172] (0xc002b9dcc0) (3) Data frame handling I0410 13:35:23.747723 6 log.go:172] (0xc000de2420) Data frame received for 1 I0410 13:35:23.747794 6 log.go:172] (0xc0020d2d20) (1) Data frame handling I0410 13:35:23.747866 6 log.go:172] (0xc0020d2d20) (1) Data frame sent I0410 13:35:23.748279 6 log.go:172] (0xc000de2420) (0xc0020d2d20) Stream removed, broadcasting: 1 I0410 13:35:23.748319 6 log.go:172] (0xc000de2420) Go away received I0410 13:35:23.748395 6 log.go:172] (0xc000de2420) (0xc0020d2d20) Stream removed, broadcasting: 1 I0410 13:35:23.748412 6 log.go:172] (0xc000de2420) (0xc002b9dcc0) Stream removed, broadcasting: 3 I0410 
13:35:23.748419 6 log.go:172] (0xc000de2420) (0xc0020d2dc0) Stream removed, broadcasting: 5 Apr 10 13:35:23.748: INFO: Waiting for endpoints: map[] Apr 10 13:35:23.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.1.173&port=8081&tries=1'] Namespace:pod-network-test-1195 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 13:35:23.751: INFO: >>> kubeConfig: /root/.kube/config I0410 13:35:23.782970 6 log.go:172] (0xc000de3080) (0xc0020d2fa0) Create stream I0410 13:35:23.782991 6 log.go:172] (0xc000de3080) (0xc0020d2fa0) Stream added, broadcasting: 1 I0410 13:35:23.785842 6 log.go:172] (0xc000de3080) Reply frame received for 1 I0410 13:35:23.785887 6 log.go:172] (0xc000de3080) (0xc002b9dd60) Create stream I0410 13:35:23.785894 6 log.go:172] (0xc000de3080) (0xc002b9dd60) Stream added, broadcasting: 3 I0410 13:35:23.786923 6 log.go:172] (0xc000de3080) Reply frame received for 3 I0410 13:35:23.786984 6 log.go:172] (0xc000de3080) (0xc002b9de00) Create stream I0410 13:35:23.787007 6 log.go:172] (0xc000de3080) (0xc002b9de00) Stream added, broadcasting: 5 I0410 13:35:23.788051 6 log.go:172] (0xc000de3080) Reply frame received for 5 I0410 13:35:23.847455 6 log.go:172] (0xc000de3080) Data frame received for 3 I0410 13:35:23.847564 6 log.go:172] (0xc002b9dd60) (3) Data frame handling I0410 13:35:23.847611 6 log.go:172] (0xc002b9dd60) (3) Data frame sent I0410 13:35:23.847633 6 log.go:172] (0xc000de3080) Data frame received for 5 I0410 13:35:23.847683 6 log.go:172] (0xc002b9de00) (5) Data frame handling I0410 13:35:23.848113 6 log.go:172] (0xc000de3080) Data frame received for 3 I0410 13:35:23.848129 6 log.go:172] (0xc002b9dd60) (3) Data frame handling I0410 13:35:23.849956 6 log.go:172] (0xc000de3080) Data frame received for 1 I0410 13:35:23.849975 6 log.go:172] (0xc0020d2fa0) (1) Data frame handling I0410 
13:35:23.849986 6 log.go:172] (0xc0020d2fa0) (1) Data frame sent I0410 13:35:23.849997 6 log.go:172] (0xc000de3080) (0xc0020d2fa0) Stream removed, broadcasting: 1 I0410 13:35:23.850080 6 log.go:172] (0xc000de3080) (0xc0020d2fa0) Stream removed, broadcasting: 1 I0410 13:35:23.850090 6 log.go:172] (0xc000de3080) (0xc002b9dd60) Stream removed, broadcasting: 3 I0410 13:35:23.850203 6 log.go:172] (0xc000de3080) (0xc002b9de00) Stream removed, broadcasting: 5 I0410 13:35:23.850231 6 log.go:172] (0xc000de3080) Go away received Apr 10 13:35:23.850: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:35:23.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1195" for this suite. Apr 10 13:35:45.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:35:45.957: INFO: namespace pod-network-test-1195 deletion completed in 22.102391269s • [SLOW TEST:46.536 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 
13:35:45.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 10 13:35:46.045: INFO: Waiting up to 5m0s for pod "var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76" in namespace "var-expansion-9834" to be "success or failure" Apr 10 13:35:46.061: INFO: Pod "var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76": Phase="Pending", Reason="", readiness=false. Elapsed: 15.811946ms Apr 10 13:35:48.066: INFO: Pod "var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020162355s Apr 10 13:35:50.070: INFO: Pod "var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024617073s STEP: Saw pod success Apr 10 13:35:50.070: INFO: Pod "var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76" satisfied condition "success or failure" Apr 10 13:35:50.073: INFO: Trying to get logs from node iruya-worker pod var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76 container dapi-container: STEP: delete the pod Apr 10 13:35:50.106: INFO: Waiting for pod var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76 to disappear Apr 10 13:35:50.132: INFO: Pod var-expansion-ef288fca-c16f-4c6f-b705-4eccb007cf76 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:35:50.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9834" for this suite. 
Apr 10 13:35:56.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:35:56.224: INFO: namespace var-expansion-9834 deletion completed in 6.088436993s • [SLOW TEST:10.266 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:35:56.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:35:56.304: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.548562ms)
Apr 10 13:35:56.308: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.398415ms)
Apr 10 13:35:56.311: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.045421ms)
Apr 10 13:35:56.314: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.654388ms)
Apr 10 13:35:56.318: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.928431ms)
Apr 10 13:35:56.322: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.56641ms)
Apr 10 13:35:56.325: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.292582ms)
Apr 10 13:35:56.328: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.749828ms)
Apr 10 13:35:56.331: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.10785ms)
Apr 10 13:35:56.334: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.822849ms)
Apr 10 13:35:56.337: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.497689ms)
Apr 10 13:35:56.340: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.921634ms)
Apr 10 13:35:56.342: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.905955ms)
Apr 10 13:35:56.345: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.009698ms)
Apr 10 13:35:56.348: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.938079ms)
Apr 10 13:35:56.352: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.076329ms)
Apr 10 13:35:56.355: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.532868ms)
Apr 10 13:35:56.378: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 23.288004ms)
Apr 10 13:35:56.383: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.159066ms)
Apr 10 13:35:56.386: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.754114ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:35:56.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6991" for this suite. Apr 10 13:36:02.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:36:02.474: INFO: namespace proxy-6991 deletion completed in 6.083490971s • [SLOW TEST:6.250 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:36:02.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 10 13:36:06.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-779337f8-6a21-42be-87b9-e8625c800a45 -c busybox-main-container 
--namespace=emptydir-1314 -- cat /usr/share/volumeshare/shareddata.txt' Apr 10 13:36:09.070: INFO: stderr: "I0410 13:36:08.973264 2242 log.go:172] (0xc000776420) (0xc000778960) Create stream\nI0410 13:36:08.973299 2242 log.go:172] (0xc000776420) (0xc000778960) Stream added, broadcasting: 1\nI0410 13:36:08.975587 2242 log.go:172] (0xc000776420) Reply frame received for 1\nI0410 13:36:08.975636 2242 log.go:172] (0xc000776420) (0xc00077c0a0) Create stream\nI0410 13:36:08.975654 2242 log.go:172] (0xc000776420) (0xc00077c0a0) Stream added, broadcasting: 3\nI0410 13:36:08.976683 2242 log.go:172] (0xc000776420) Reply frame received for 3\nI0410 13:36:08.976728 2242 log.go:172] (0xc000776420) (0xc00088c000) Create stream\nI0410 13:36:08.976751 2242 log.go:172] (0xc000776420) (0xc00088c000) Stream added, broadcasting: 5\nI0410 13:36:08.977876 2242 log.go:172] (0xc000776420) Reply frame received for 5\nI0410 13:36:09.065637 2242 log.go:172] (0xc000776420) Data frame received for 5\nI0410 13:36:09.065673 2242 log.go:172] (0xc00088c000) (5) Data frame handling\nI0410 13:36:09.065692 2242 log.go:172] (0xc000776420) Data frame received for 3\nI0410 13:36:09.065700 2242 log.go:172] (0xc00077c0a0) (3) Data frame handling\nI0410 13:36:09.065709 2242 log.go:172] (0xc00077c0a0) (3) Data frame sent\nI0410 13:36:09.065716 2242 log.go:172] (0xc000776420) Data frame received for 3\nI0410 13:36:09.065722 2242 log.go:172] (0xc00077c0a0) (3) Data frame handling\nI0410 13:36:09.066745 2242 log.go:172] (0xc000776420) Data frame received for 1\nI0410 13:36:09.066767 2242 log.go:172] (0xc000778960) (1) Data frame handling\nI0410 13:36:09.066781 2242 log.go:172] (0xc000778960) (1) Data frame sent\nI0410 13:36:09.066793 2242 log.go:172] (0xc000776420) (0xc000778960) Stream removed, broadcasting: 1\nI0410 13:36:09.066809 2242 log.go:172] (0xc000776420) Go away received\nI0410 13:36:09.067103 2242 log.go:172] (0xc000776420) (0xc000778960) Stream removed, broadcasting: 1\nI0410 13:36:09.067116 2242 
log.go:172] (0xc000776420) (0xc00077c0a0) Stream removed, broadcasting: 3\nI0410 13:36:09.067121 2242 log.go:172] (0xc000776420) (0xc00088c000) Stream removed, broadcasting: 5\n" Apr 10 13:36:09.070: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:36:09.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1314" for this suite. Apr 10 13:36:15.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:36:15.167: INFO: namespace emptydir-1314 deletion completed in 6.093495612s • [SLOW TEST:12.692 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:36:15.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: 
deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 10 13:36:15.280: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7781,SelfLink:/api/v1/namespaces/watch-7781/configmaps/e2e-watch-test-resource-version,UID:4b43baa8-85ad-4cf1-b0d5-aaa834eddce5,ResourceVersion:4667136,Generation:0,CreationTimestamp:2020-04-10 13:36:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 13:36:15.280: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7781,SelfLink:/api/v1/namespaces/watch-7781/configmaps/e2e-watch-test-resource-version,UID:4b43baa8-85ad-4cf1-b0d5-aaa834eddce5,ResourceVersion:4667137,Generation:0,CreationTimestamp:2020-04-10 13:36:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:36:15.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7781" for this suite. 
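[editor's note] The behavior above follows from the watch API's `resourceVersion` parameter: a watch started at the resourceVersion returned by the first update replays only events newer than that version, which is why only the second MODIFIED and the DELETED events are observed. In REST form the request looks roughly like this (placeholders are illustrative):

```
GET /api/v1/namespaces/{namespace}/configmaps?watch=true&resourceVersion=<rv>
```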
Apr 10 13:36:21.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:36:21.384: INFO: namespace watch-7781 deletion completed in 6.084763446s • [SLOW TEST:6.217 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:36:21.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8a177e28-2d80-479f-9671-98414606f9bb STEP: Creating a pod to test consume secrets Apr 10 13:36:21.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b" in namespace "projected-2927" to be "success or failure" Apr 10 13:36:21.451: INFO: Pod "pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.335974ms Apr 10 13:36:23.455: INFO: Pod "pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007876714s Apr 10 13:36:25.462: INFO: Pod "pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014511538s STEP: Saw pod success Apr 10 13:36:25.462: INFO: Pod "pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b" satisfied condition "success or failure" Apr 10 13:36:25.465: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b container projected-secret-volume-test: STEP: delete the pod Apr 10 13:36:25.498: INFO: Waiting for pod pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b to disappear Apr 10 13:36:25.511: INFO: Pod pod-projected-secrets-ea813535-18f3-495f-bfdd-357d60114f0b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:36:25.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2927" for this suite. 
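[editor's note] The non-root/defaultMode/fsGroup combination this spec checks corresponds to a projected-secret pod roughly like the sketch below (all names are illustrative; the secret itself is assumed to exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 1001     # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400   # file mode applied to projected keys
      sources:
      - secret:
          name: my-secret   # assumed pre-existing secret
```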
Apr 10 13:36:31.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:36:31.623: INFO: namespace projected-2927 deletion completed in 6.108389507s • [SLOW TEST:10.239 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:36:31.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 10 13:36:31.695: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667205,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 13:36:31.696: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667205,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 10 13:36:41.703: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667225,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 10 13:36:41.703: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667225,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 10 13:36:51.728: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667246,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 13:36:51.728: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667246,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 10 13:37:01.752: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667267,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 13:37:01.752: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-a,UID:496f0f1a-0764-430f-afc4-9b6e05215f27,ResourceVersion:4667267,Generation:0,CreationTimestamp:2020-04-10 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 10 13:37:11.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-b,UID:ab88f4d8-ccd9-4aeb-97e7-f8322975c14f,ResourceVersion:4667288,Generation:0,CreationTimestamp:2020-04-10 13:37:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 13:37:11.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-b,UID:ab88f4d8-ccd9-4aeb-97e7-f8322975c14f,ResourceVersion:4667288,Generation:0,CreationTimestamp:2020-04-10 13:37:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 10 13:37:21.765: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-b,UID:ab88f4d8-ccd9-4aeb-97e7-f8322975c14f,ResourceVersion:4667308,Generation:0,CreationTimestamp:2020-04-10 13:37:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 13:37:21.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3396,SelfLink:/api/v1/namespaces/watch-3396/configmaps/e2e-watch-test-configmap-b,UID:ab88f4d8-ccd9-4aeb-97e7-f8322975c14f,ResourceVersion:4667308,Generation:0,CreationTimestamp:2020-04-10 13:37:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:37:31.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3396" for this suite. 
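[editor's note] Each event above is logged twice because configmap A's changes match both the label-A watch and the A-or-B watch (and likewise for B). A label-filtered watch is expressed with `labelSelector` on the list endpoint, roughly (sketch; `%3D` is the URL-encoded `=`):

```
GET /api/v1/namespaces/{namespace}/configmaps?watch=true&labelSelector=watch-this-configmap%3Dmultiple-watchers-A
```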
Apr 10 13:37:37.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:37:37.911: INFO: namespace watch-3396 deletion completed in 6.139732968s • [SLOW TEST:66.288 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:37:37.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:37:37.941: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:37:41.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9290" for this suite. 
Apr 10 13:38:22.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:38:22.128: INFO: namespace pods-9290 deletion completed in 40.130414544s • [SLOW TEST:44.216 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:38:22.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 10 13:38:22.191: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 13:38:22.199: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 13:38:22.201: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 10 13:38:22.206: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.206: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 13:38:22.206: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.206: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:38:22.206: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 10 13:38:22.212: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.212: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 13:38:22.212: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.212: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 13:38:22.212: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.212: INFO: Container coredns ready: true, restart count 0 Apr 10 13:38:22.212: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 10 13:38:22.212: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-60768bc5-24f9-4388-b605-0c86e6bfc4ac 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-60768bc5-24f9-4388-b605-0c86e6bfc4ac off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-60768bc5-24f9-4388-b605-0c86e6bfc4ac [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:38:30.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9857" for this suite. Apr 10 13:38:48.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:38:48.422: INFO: namespace sched-pred-9857 deletion completed in 18.081267745s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.294 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:38:48.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 10 13:38:48.490: INFO: Waiting up to 5m0s for pod "downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d" in namespace "downward-api-3768" to be "success or failure" Apr 10 13:38:48.498: INFO: Pod "downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498816ms Apr 10 13:38:50.506: INFO: Pod "downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01530801s Apr 10 13:38:52.517: INFO: Pod "downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026856304s STEP: Saw pod success Apr 10 13:38:52.517: INFO: Pod "downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d" satisfied condition "success or failure" Apr 10 13:38:52.519: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d container dapi-container: STEP: delete the pod Apr 10 13:38:52.552: INFO: Waiting for pod downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d to disappear Apr 10 13:38:52.557: INFO: Pod downward-api-3e89ada9-ffb8-46d2-a718-619f9706d12d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:38:52.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3768" for this suite. 
Apr 10 13:38:58.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:38:58.652: INFO: namespace downward-api-3768 deletion completed in 6.091531668s • [SLOW TEST:10.230 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:38:58.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Apr 10 13:38:59.236: INFO: created pod pod-service-account-defaultsa Apr 10 13:38:59.236: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 10 13:38:59.253: INFO: created pod pod-service-account-mountsa Apr 10 13:38:59.253: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 10 13:38:59.264: INFO: created pod pod-service-account-nomountsa Apr 10 13:38:59.264: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 10 13:38:59.345: INFO: created pod pod-service-account-defaultsa-mountspec Apr 
10 13:38:59.345: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 10 13:38:59.372: INFO: created pod pod-service-account-mountsa-mountspec Apr 10 13:38:59.372: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 10 13:38:59.402: INFO: created pod pod-service-account-nomountsa-mountspec Apr 10 13:38:59.402: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 10 13:38:59.419: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 10 13:38:59.419: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 10 13:38:59.518: INFO: created pod pod-service-account-mountsa-nomountspec Apr 10 13:38:59.518: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 10 13:38:59.522: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 10 13:38:59.522: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:38:59.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9580" for this suite. 
Apr 10 13:39:25.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:39:25.719: INFO: namespace svcaccounts-9580 deletion completed in 26.176104806s • [SLOW TEST:27.066 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:39:25.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:39:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6078" for this suite. 
Apr 10 13:40:15.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:40:15.924: INFO: namespace kubelet-test-6078 deletion completed in 46.094032758s • [SLOW TEST:50.204 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:40:15.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:40:16.027: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 10 13:40:16.042: INFO: Number of nodes with available pods: 0 Apr 10 13:40:16.043: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 10 13:40:16.091: INFO: Number of nodes with available pods: 0 Apr 10 13:40:16.091: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:17.095: INFO: Number of nodes with available pods: 0 Apr 10 13:40:17.095: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:18.095: INFO: Number of nodes with available pods: 0 Apr 10 13:40:18.095: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:19.096: INFO: Number of nodes with available pods: 1 Apr 10 13:40:19.096: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 10 13:40:19.159: INFO: Number of nodes with available pods: 1 Apr 10 13:40:19.159: INFO: Number of running nodes: 0, number of available pods: 1 Apr 10 13:40:20.163: INFO: Number of nodes with available pods: 0 Apr 10 13:40:20.163: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 10 13:40:20.176: INFO: Number of nodes with available pods: 0 Apr 10 13:40:20.176: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:21.180: INFO: Number of nodes with available pods: 0 Apr 10 13:40:21.180: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:22.179: INFO: Number of nodes with available pods: 0 Apr 10 13:40:22.179: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:23.180: INFO: Number of nodes with available pods: 0 Apr 10 13:40:23.180: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:24.180: INFO: Number of nodes with available pods: 0 Apr 10 13:40:24.180: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:25.180: INFO: Number of nodes with available pods: 0 Apr 10 13:40:25.180: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:26.180: INFO: Number of nodes with available 
pods: 1 Apr 10 13:40:26.180: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5620, will wait for the garbage collector to delete the pods Apr 10 13:40:26.246: INFO: Deleting DaemonSet.extensions daemon-set took: 6.404926ms Apr 10 13:40:26.546: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.310628ms Apr 10 13:40:32.249: INFO: Number of nodes with available pods: 0 Apr 10 13:40:32.249: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 13:40:32.252: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5620/daemonsets","resourceVersion":"4667938"},"items":null} Apr 10 13:40:32.255: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5620/pods","resourceVersion":"4667938"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:40:32.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5620" for this suite. 
Apr 10 13:40:38.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:40:38.377: INFO: namespace daemonsets-5620 deletion completed in 6.093352629s • [SLOW TEST:22.453 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:40:38.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 10 13:40:38.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:38.498: INFO: Number of nodes with available pods: 0 Apr 10 13:40:38.498: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:39.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:39.506: INFO: Number of nodes with available pods: 0 Apr 10 13:40:39.506: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:40.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:40.505: INFO: Number of nodes with available pods: 0 Apr 10 13:40:40.505: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:41.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:41.506: INFO: Number of nodes with available pods: 0 Apr 10 13:40:41.506: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:42.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:42.507: INFO: Number of nodes with available pods: 1 Apr 10 13:40:42.507: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:43.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:43.507: INFO: Number of nodes with available pods: 2 Apr 10 13:40:43.507: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 10 13:40:43.568: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:43.577: INFO: Number of nodes with available pods: 1 Apr 10 13:40:43.577: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:44.584: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:44.588: INFO: Number of nodes with available pods: 1 Apr 10 13:40:44.588: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:45.583: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:45.586: INFO: Number of nodes with available pods: 1 Apr 10 13:40:45.586: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:46.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:46.585: INFO: Number of nodes with available pods: 1 Apr 10 13:40:46.585: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:47.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 13:40:47.585: INFO: Number of nodes with available pods: 1 Apr 10 13:40:47.586: INFO: Node iruya-worker is running more than one daemon pod Apr 10 13:40:48.583: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 10 13:40:48.587: INFO: Number of nodes with available pods: 2 Apr 10 13:40:48.587: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5958, will wait for the garbage collector to delete the pods Apr 10 13:40:48.654: INFO: Deleting DaemonSet.extensions daemon-set took: 7.215566ms Apr 10 13:40:48.954: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.261625ms Apr 10 13:41:02.274: INFO: Number of nodes with available pods: 0 Apr 10 13:41:02.274: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 13:41:02.277: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5958/daemonsets","resourceVersion":"4668072"},"items":null} Apr 10 13:41:02.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5958/pods","resourceVersion":"4668072"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:41:02.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5958" for this suite. 
Apr 10 13:41:08.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:41:08.400: INFO: namespace daemonsets-5958 deletion completed in 6.107270526s • [SLOW TEST:30.022 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:41:08.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6677.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6677.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 13:41:14.561: INFO: DNS probes using dns-6677/dns-test-4fd852ca-6ca9-46fe-a7ba-d1170cb1705c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:41:14.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6677" for this suite. 
Apr 10 13:41:20.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:41:20.726: INFO: namespace dns-6677 deletion completed in 6.117805595s
• [SLOW TEST:12.325 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:41:20.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-rnw8
STEP: Creating a pod to test atomic-volume-subpath
Apr 10 13:41:20.815: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rnw8" in namespace "subpath-4320" to be "success or failure"
Apr 10 13:41:20.823: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.664931ms
Apr 10 13:41:22.828: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013231001s
Apr 10 13:41:24.833: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 4.018137913s
Apr 10 13:41:26.837: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 6.022795951s
Apr 10 13:41:28.841: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 8.026597762s
Apr 10 13:41:30.846: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 10.031002802s
Apr 10 13:41:32.850: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 12.035718282s
Apr 10 13:41:34.855: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 14.040003964s
Apr 10 13:41:36.859: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 16.043850274s
Apr 10 13:41:38.863: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 18.047863707s
Apr 10 13:41:40.867: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 20.0520802s
Apr 10 13:41:42.871: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Running", Reason="", readiness=true. Elapsed: 22.0564949s
Apr 10 13:41:44.875: INFO: Pod "pod-subpath-test-projected-rnw8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060329371s
STEP: Saw pod success
Apr 10 13:41:44.875: INFO: Pod "pod-subpath-test-projected-rnw8" satisfied condition "success or failure"
Apr 10 13:41:44.877: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-rnw8 container test-container-subpath-projected-rnw8:
STEP: delete the pod
Apr 10 13:41:44.897: INFO: Waiting for pod pod-subpath-test-projected-rnw8 to disappear
Apr 10 13:41:44.907: INFO: Pod pod-subpath-test-projected-rnw8 no longer exists
STEP: Deleting pod pod-subpath-test-projected-rnw8
Apr 10 13:41:44.907: INFO: Deleting pod "pod-subpath-test-projected-rnw8" in namespace "subpath-4320"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:41:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4320" for this suite.
Apr 10 13:41:50.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:41:51.004: INFO: namespace subpath-4320 deletion completed in 6.091486913s
• [SLOW TEST:30.278 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:41:51.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 10 13:41:59.094: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:41:59.112: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:01.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:01.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:03.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:03.117: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:05.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:05.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:07.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:07.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:09.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:09.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:11.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:11.117: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:13.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:13.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:15.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:15.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:17.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:17.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:19.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:19.117: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:21.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:21.116: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 10 13:42:23.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 10 13:42:23.116: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:42:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6499" for this suite.
Apr 10 13:42:45.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:42:45.219: INFO: namespace container-lifecycle-hook-6499 deletion completed in 22.089979575s
• [SLOW TEST:54.215 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:42:45.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5a175e87-42e4-47b2-b4ba-44b72c8f0a6e
STEP: Creating a pod to test consume secrets
Apr 10 13:42:45.314: INFO: Waiting up to 5m0s for pod "pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5" in namespace "secrets-4790" to be "success or failure"
Apr 10 13:42:45.327: INFO: Pod "pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.132481ms
Apr 10 13:42:47.331: INFO: Pod "pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017239538s
Apr 10 13:42:49.336: INFO: Pod "pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02169963s
STEP: Saw pod success
Apr 10 13:42:49.336: INFO: Pod "pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5" satisfied condition "success or failure"
Apr 10 13:42:49.339: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5 container secret-volume-test:
STEP: delete the pod
Apr 10 13:42:49.369: INFO: Waiting for pod pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5 to disappear
Apr 10 13:42:49.381: INFO: Pod pod-secrets-b5f0882a-2636-4adc-8911-0d103f366cf5 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:42:49.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4790" for this suite.
Apr 10 13:42:55.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:42:55.475: INFO: namespace secrets-4790 deletion completed in 6.087365564s
• [SLOW TEST:10.255 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:42:55.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 10 13:42:59.603: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 10 13:43:14.718: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:43:14.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8679" for this suite.
Apr 10 13:43:20.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:43:20.811: INFO: namespace pods-8679 deletion completed in 6.087081772s
• [SLOW TEST:25.336 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:43:20.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 10 13:43:20.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3242'
Apr 10 13:43:20.951: INFO: stderr: ""
Apr 10 13:43:20.951: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 10 13:43:20.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3242'
Apr 10 13:43:32.166: INFO: stderr: ""
Apr 10 13:43:32.166: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:43:32.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3242" for this suite.
Apr 10 13:43:38.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:43:38.299: INFO: namespace kubectl-3242 deletion completed in 6.116293552s
• [SLOW TEST:17.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:43:38.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 13:43:38.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8" in namespace "projected-2045" to be "success or failure"
Apr 10 13:43:38.370: INFO: Pod "downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023959ms
Apr 10 13:43:40.374: INFO: Pod "downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007232431s
Apr 10 13:43:42.378: INFO: Pod "downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011450174s
STEP: Saw pod success
Apr 10 13:43:42.378: INFO: Pod "downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8" satisfied condition "success or failure"
Apr 10 13:43:42.381: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8 container client-container:
STEP: delete the pod
Apr 10 13:43:42.427: INFO: Waiting for pod downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8 to disappear
Apr 10 13:43:42.430: INFO: Pod downwardapi-volume-74ee147f-7b91-4571-b4a4-9866eb19e3f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:43:42.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2045" for this suite.
Apr 10 13:43:48.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:43:48.524: INFO: namespace projected-2045 deletion completed in 6.091525556s
• [SLOW TEST:10.225 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:43:48.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-5ppd
STEP: Creating a pod to test atomic-volume-subpath
Apr 10 13:43:48.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5ppd" in namespace "subpath-1990" to be "success or failure"
Apr 10 13:43:48.657: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.927907ms
Apr 10 13:43:50.659: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017491816s
Apr 10 13:43:52.662: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 4.020186045s
Apr 10 13:43:54.665: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 6.023480998s
Apr 10 13:43:56.670: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 8.028347323s
Apr 10 13:43:58.676: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 10.03401489s
Apr 10 13:44:00.683: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 12.040979894s
Apr 10 13:44:02.687: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 14.044927966s
Apr 10 13:44:04.691: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 16.049120036s
Apr 10 13:44:06.695: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 18.053313472s
Apr 10 13:44:08.699: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 20.05739678s
Apr 10 13:44:10.704: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Running", Reason="", readiness=true. Elapsed: 22.062266973s
Apr 10 13:44:12.708: INFO: Pod "pod-subpath-test-configmap-5ppd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066465507s
STEP: Saw pod success
Apr 10 13:44:12.708: INFO: Pod "pod-subpath-test-configmap-5ppd" satisfied condition "success or failure"
Apr 10 13:44:12.712: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-5ppd container test-container-subpath-configmap-5ppd:
STEP: delete the pod
Apr 10 13:44:12.791: INFO: Waiting for pod pod-subpath-test-configmap-5ppd to disappear
Apr 10 13:44:12.795: INFO: Pod pod-subpath-test-configmap-5ppd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5ppd
Apr 10 13:44:12.795: INFO: Deleting pod "pod-subpath-test-configmap-5ppd" in namespace "subpath-1990"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:44:12.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1990" for this suite.
Apr 10 13:44:18.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:44:18.886: INFO: namespace subpath-1990 deletion completed in 6.08513194s
• [SLOW TEST:30.361 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:44:18.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-848
I0410 13:44:18.934058 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-848, replica count: 1
I0410 13:44:19.984484 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0410 13:44:20.984751 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0410 13:44:21.984977 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0410 13:44:22.985294 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 10 13:44:23.108: INFO: Created: latency-svc-7qdws
Apr 10 13:44:23.114: INFO: Got endpoints: latency-svc-7qdws [28.485332ms]
Apr 10 13:44:23.175: INFO: Created: latency-svc-49dz5
Apr 10 13:44:23.186: INFO: Got endpoints: latency-svc-49dz5 [71.867287ms]
Apr 10 13:44:23.234: INFO: Created: latency-svc-bdnsp
Apr 10 13:44:23.246: INFO: Got endpoints: latency-svc-bdnsp [131.362345ms]
Apr 10 13:44:23.307: INFO: Created: latency-svc-q74x5
Apr 10 13:44:23.331: INFO: Got endpoints: latency-svc-q74x5 [215.44939ms]
Apr 10 13:44:23.360: INFO: Created: latency-svc-wjs52
Apr 10 13:44:23.372: INFO: Got endpoints: latency-svc-wjs52 [257.410082ms]
Apr 10 13:44:23.388: INFO: Created: latency-svc-bk5z7
Apr 10 13:44:23.403: INFO: Got endpoints: latency-svc-bk5z7 [286.347047ms]
Apr 10 13:44:23.444: INFO: Created: latency-svc-2nzff
Apr 10 13:44:23.448: INFO: Got endpoints: latency-svc-2nzff [331.75851ms]
Apr 10 13:44:23.487: INFO: Created: latency-svc-6gd4p
Apr 10 13:44:23.499: INFO: Got endpoints: latency-svc-6gd4p [382.884859ms]
Apr 10 13:44:23.516: INFO: Created: latency-svc-qgp57
Apr 10 13:44:23.535: INFO: Got endpoints: latency-svc-qgp57 [419.169923ms]
Apr 10 13:44:23.582: INFO: Created: latency-svc-ng9tb
Apr 10 13:44:23.586: INFO: Got endpoints: latency-svc-ng9tb [470.052095ms]
Apr 10 13:44:23.635: INFO: Created: latency-svc-w4t2g
Apr 10 13:44:23.643: INFO: Got endpoints: latency-svc-w4t2g [527.004125ms]
Apr 10 13:44:23.667: INFO: Created: latency-svc-252l6
Apr 10 13:44:23.680: INFO: Got endpoints: latency-svc-252l6 [563.590581ms]
Apr 10 13:44:23.726: INFO: Created: latency-svc-g8tfw
Apr 10 13:44:23.734: INFO: Got endpoints: latency-svc-g8tfw [617.442184ms]
Apr 10 13:44:23.775: INFO: Created: latency-svc-j47zq
Apr 10 13:44:23.808: INFO: Got endpoints: latency-svc-j47zq [692.010478ms]
Apr 10 13:44:23.906: INFO: Created: latency-svc-rkd2n
Apr 10 13:44:23.909: INFO: Got endpoints: latency-svc-rkd2n [792.37651ms]
Apr 10 13:44:23.966: INFO: Created: latency-svc-kpf2r
Apr 10 13:44:23.982: INFO: Got endpoints: latency-svc-kpf2r [865.383242ms]
Apr 10 13:44:24.001: INFO: Created: latency-svc-97jvw
Apr 10 13:44:24.049: INFO: Got endpoints: latency-svc-97jvw [862.737058ms]
Apr 10 13:44:24.067: INFO: Created: latency-svc-7shs2
Apr 10 13:44:24.089: INFO: Got endpoints: latency-svc-7shs2 [842.628588ms]
Apr 10 13:44:24.111: INFO: Created: latency-svc-9vgq8
Apr 10 13:44:24.126: INFO: Got endpoints: latency-svc-9vgq8 [795.033882ms]
Apr 10 13:44:24.146: INFO: Created: latency-svc-5clwm
Apr 10 13:44:24.187: INFO: Got endpoints: latency-svc-5clwm [814.672085ms]
Apr 10 13:44:24.205: INFO: Created: latency-svc-7k4hk
Apr 10 13:44:24.216: INFO: Got endpoints: latency-svc-7k4hk [813.475672ms]
Apr 10 13:44:24.234: INFO: Created: latency-svc-s2ppq
Apr 10 13:44:24.246: INFO: Got endpoints: latency-svc-s2ppq [798.510626ms]
Apr 10 13:44:24.264: INFO: Created: latency-svc-9b4p7
Apr 10 13:44:24.276: INFO: Got endpoints: latency-svc-9b4p7 [777.521596ms]
Apr 10 13:44:24.318: INFO: Created: latency-svc-j6ngg
Apr 10 13:44:24.322: INFO: Got endpoints: latency-svc-j6ngg [787.076657ms]
Apr 10 13:44:24.368: INFO: Created: latency-svc-86z7g
Apr 10 13:44:24.379: INFO: Got endpoints: latency-svc-86z7g [792.988557ms]
Apr 10 13:44:24.403: INFO: Created: latency-svc-rpslw
Apr 10 13:44:24.438: INFO: Got endpoints: latency-svc-rpslw [794.372876ms]
Apr 10 13:44:24.458: INFO: Created: latency-svc-mlv2l
Apr 10 13:44:24.470: INFO: Got endpoints: latency-svc-mlv2l [789.877235ms]
Apr 10 13:44:24.488: INFO: Created: latency-svc-lh8s9
Apr 10 13:44:24.500: INFO: Got endpoints: latency-svc-lh8s9 [766.162772ms]
Apr 10 13:44:24.518: INFO: Created: latency-svc-4vmlv
Apr 10 13:44:24.530: INFO: Got endpoints: latency-svc-4vmlv [721.777194ms]
Apr 10 13:44:24.577: INFO: Created: latency-svc-d9548
Apr 10 13:44:24.588: INFO: Got endpoints: latency-svc-d9548 [679.404896ms]
Apr 10 13:44:24.612: INFO: Created: latency-svc-kv4pd
Apr 10 13:44:24.621: INFO: Got endpoints: latency-svc-kv4pd [638.957531ms]
Apr 10 13:44:24.643: INFO: Created: latency-svc-rhbkb
Apr 10 13:44:24.651: INFO: Got endpoints: latency-svc-rhbkb [602.090479ms]
Apr 10 13:44:24.668: INFO: Created: latency-svc-v8f64
Apr 10 13:44:24.719: INFO: Got endpoints: latency-svc-v8f64 [630.047162ms]
Apr 10 13:44:24.728: INFO: Created: latency-svc-k5rjd
Apr 10 13:44:24.742: INFO: Got endpoints: latency-svc-k5rjd [616.097512ms]
Apr 10 13:44:24.758: INFO: Created: latency-svc-qr7vs
Apr 10 13:44:24.786: INFO: Got endpoints: latency-svc-qr7vs [599.468642ms]
Apr 10 13:44:24.816: INFO: Created: latency-svc-p628z
Apr 10 13:44:24.845: INFO: Got endpoints: latency-svc-p628z [628.588072ms]
Apr 10 13:44:24.908: INFO: Created: latency-svc-8wtgj
Apr 10 13:44:24.935: INFO: Got endpoints: latency-svc-8wtgj [688.857119ms]
Apr 10 13:44:24.996: INFO: Created: latency-svc-hkckd
Apr 10 13:44:24.998: INFO: Got endpoints: latency-svc-hkckd [721.476984ms]
Apr 10 13:44:25.021: INFO: Created: latency-svc-cn7gz
Apr 10 13:44:25.031: INFO: Got endpoints: latency-svc-cn7gz [709.37817ms]
Apr 10 13:44:25.051: INFO: Created: latency-svc-lnn92
Apr 10 13:44:25.068: INFO: Got endpoints: latency-svc-lnn92 [688.37854ms]
Apr 10 13:44:25.094: INFO: Created: latency-svc-4vqfl
Apr 10 13:44:25.162: INFO: Got endpoints: latency-svc-4vqfl [724.517631ms]
Apr 10 13:44:25.166: INFO: Created: latency-svc-fbxn9
Apr 10 13:44:25.188: INFO: Got endpoints: latency-svc-fbxn9 [717.608663ms]
Apr 10 13:44:25.212: INFO: Created: latency-svc-qm4jt
Apr 10 13:44:25.218: INFO: Got endpoints: latency-svc-qm4jt [717.933544ms]
Apr 10 13:44:25.237: INFO: Created: latency-svc-6rqm6
Apr 10 13:44:25.248: INFO: Got endpoints: latency-svc-6rqm6 [717.831804ms]
Apr 10 13:44:25.296: INFO: Created: latency-svc-4rcp8
Apr 10 13:44:25.300: INFO: Got endpoints: latency-svc-4rcp8 [711.569652ms]
Apr 10 13:44:25.328: INFO: Created: latency-svc-4qkrr
Apr 10 13:44:25.338: INFO: Got endpoints: latency-svc-4qkrr [717.448065ms]
Apr 10 13:44:25.364: INFO: Created: latency-svc-8snj9
Apr 10 13:44:25.375: INFO: Got endpoints: latency-svc-8snj9 [723.507883ms]
Apr 10 13:44:25.392: INFO: Created: latency-svc-mlkj7
Apr 10 13:44:25.432: INFO: Got endpoints: latency-svc-mlkj7 [712.520095ms]
Apr 10 13:44:25.452: INFO: Created: latency-svc-ch6bw
Apr 10 13:44:25.466: INFO: Got endpoints: latency-svc-ch6bw [723.747509ms]
Apr 10 13:44:25.483: INFO: Created: latency-svc-6gmfx
Apr 10 13:44:25.507: INFO: Got endpoints: latency-svc-6gmfx [721.102093ms]
Apr 10 13:44:25.576: INFO: Created: latency-svc-nkqr4
Apr 10 13:44:25.579: INFO: Got endpoints: latency-svc-nkqr4 [734.680182ms]
Apr 10 13:44:25.650: INFO: Created: latency-svc-d4khn
Apr 10 13:44:25.664: INFO: Got endpoints: latency-svc-d4khn [728.538757ms]
Apr 10 13:44:25.726: INFO: Created: latency-svc-6blhc
Apr 10 13:44:25.730: INFO: Got endpoints: latency-svc-6blhc [731.794804ms]
Apr 10 13:44:25.781: INFO: Created: latency-svc-555px
Apr 10 13:44:25.791: INFO: Got endpoints: latency-svc-555px [759.961062ms]
Apr 10 13:44:25.807: INFO: Created: latency-svc-frjmz
Apr 10 13:44:25.864: INFO: Got endpoints: latency-svc-frjmz [796.152772ms]
Apr 10 13:44:25.878: INFO: Created: latency-svc-9jm7n
Apr 10 13:44:25.895: INFO: Got endpoints: latency-svc-9jm7n [732.76115ms]
Apr 10 13:44:25.921: INFO: Created: latency-svc-ptjm6
Apr 10 13:44:25.931: INFO: Got endpoints: latency-svc-ptjm6 [743.613378ms]
Apr 10 13:44:25.951: INFO: Created: latency-svc-rj5g7
Apr 10 13:44:26.019: INFO: Got endpoints: latency-svc-rj5g7 [800.509949ms]
Apr 10 13:44:26.021: INFO: Created: latency-svc-xvhbr
Apr 10 13:44:26.029: INFO: Got endpoints: latency-svc-xvhbr [780.512921ms]
Apr 10 13:44:26.065: INFO: Created: latency-svc-r5xhq
Apr 10 13:44:26.079: INFO: Got endpoints: latency-svc-r5xhq [779.142978ms]
Apr 10 13:44:26.101: INFO: Created: latency-svc-hpj6x
Apr 10 13:44:26.169: INFO: Got endpoints: latency-svc-hpj6x [829.96792ms]
Apr 10 13:44:26.171: INFO: Created: latency-svc-8wgq6
Apr 10 13:44:26.178: INFO: Got endpoints: latency-svc-8wgq6 [803.832926ms]
Apr 10 13:44:26.210: INFO: Created: latency-svc-s9xd4
Apr 10 13:44:26.221: INFO: Got endpoints: latency-svc-s9xd4 [789.17211ms]
Apr 10 13:44:26.239: INFO: Created: latency-svc-rp6ds
Apr 10 13:44:26.251: INFO: Got endpoints: latency-svc-rp6ds [785.369247ms]
Apr 10 13:44:26.318: INFO: Created: latency-svc-p6gzt
Apr 10 13:44:26.322: INFO: Got endpoints: latency-svc-p6gzt [814.372074ms]
Apr 10 13:44:26.348: INFO: Created: latency-svc-bj2tz
Apr 10 13:44:26.360: INFO: Got endpoints: latency-svc-bj2tz [780.071584ms]
Apr 10 13:44:26.384: INFO: Created: latency-svc-psp5v
Apr 10 13:44:26.408: INFO: Got endpoints: latency-svc-psp5v [743.872005ms]
Apr 10 13:44:26.477: INFO: Created: latency-svc-5hct9
Apr 10 13:44:26.504: INFO: Got endpoints: latency-svc-5hct9 [774.603065ms]
Apr 10 13:44:26.534: INFO: Created: latency-svc-vx2q5
Apr 10 13:44:26.550: INFO: Got endpoints: latency-svc-vx2q5 [759.049217ms]
Apr 10 13:44:26.570: INFO: Created: latency-svc-4g9n6
Apr 10 13:44:26.599: INFO: Got endpoints: latency-svc-4g9n6 [735.269873ms]
Apr 10 13:44:26.612: INFO: Created: latency-svc-rml6r
Apr 10 13:44:26.623: INFO: Got endpoints: latency-svc-rml6r [727.633371ms]
Apr 10 13:44:26.647: INFO: Created: latency-svc-9gp87
Apr 10 13:44:26.670: INFO: Got endpoints: latency-svc-9gp87 [738.798505ms]
Apr 10 13:44:26.758: INFO: Created: latency-svc-9jdfd
Apr 10 13:44:26.780: INFO: Created: latency-svc-sk7lq
Apr 10 13:44:26.780: INFO: Got endpoints: latency-svc-9jdfd [761.50965ms]
Apr 10 13:44:26.792: INFO: Got endpoints: latency-svc-sk7lq [762.802894ms]
Apr 10 13:44:26.810: INFO: Created: latency-svc-4jmc4
Apr 10 13:44:26.899: INFO: Got endpoints: latency-svc-4jmc4 [819.776452ms]
Apr 10 13:44:26.912: INFO: Created: latency-svc-5b68l
Apr 10 13:44:26.953: INFO: Got endpoints: latency-svc-5b68l [783.824764ms]
Apr 10 13:44:27.038: INFO: Created: latency-svc-8pmw2
Apr 10 13:44:27.041: INFO: Got endpoints: latency-svc-8pmw2 [862.662725ms]
Apr 10 13:44:27.074: INFO: Created: latency-svc-hvtnf
Apr 10 13:44:27.098: INFO: Got endpoints: latency-svc-hvtnf [876.921166ms]
Apr 10 13:44:27.129: INFO: Created: latency-svc-zbqr9
Apr 10 13:44:27.198: INFO: Got endpoints: latency-svc-zbqr9 [947.237486ms]
Apr 10 13:44:27.223: INFO: Created: latency-svc-6nrg2
Apr 10 13:44:27.247: INFO: Got endpoints: latency-svc-6nrg2 [924.767755ms]
Apr 10 13:44:27.272: INFO: Created: latency-svc-rf4tn
Apr 10 13:44:27.296: INFO: Got endpoints: latency-svc-rf4tn [936.120739ms]
Apr 10 13:44:27.348: INFO: Created: latency-svc-dk4b9
Apr 10 13:44:27.357: INFO: Got endpoints: latency-svc-dk4b9 [949.424362ms]
Apr 10 13:44:27.409: INFO: Created: latency-svc-nkzj5
Apr 10 13:44:27.429: INFO: Got endpoints: latency-svc-nkzj5 [924.943885ms]
Apr 10 13:44:27.480: INFO: Created: latency-svc-gtwvj
Apr 10 13:44:27.487: INFO: Got endpoints: latency-svc-gtwvj [936.663414ms]
Apr 10 13:44:27.506: INFO: Created: latency-svc-js6xk
Apr 10 13:44:27.524: INFO: Got endpoints: latency-svc-js6xk [924.623632ms]
Apr 10 13:44:27.541: INFO: Created: latency-svc-lh4mc
Apr 10 13:44:27.554: INFO: Got endpoints: latency-svc-lh4mc [930.935374ms]
Apr 10 13:44:27.630: INFO: Created: latency-svc-tnj4w
Apr 10 13:44:27.633: INFO: Got endpoints: latency-svc-tnj4w [962.856419ms]
Apr 10 13:44:27.656: INFO: Created: latency-svc-2xsfh
Apr 10 13:44:27.668: INFO: Got endpoints: latency-svc-2xsfh [887.75681ms]
Apr 10 13:44:27.704: INFO: Created: latency-svc-qd2rr
Apr 10 13:44:27.716: INFO: Got endpoints: latency-svc-qd2rr [924.639112ms]
Apr 10 13:44:27.780: INFO: Created: latency-svc-9lmfn
Apr 10 13:44:27.784: INFO: Got endpoints: latency-svc-9lmfn [884.84499ms]
Apr 10 13:44:27.817: INFO: Created: latency-svc-qjp6n
Apr 10 13:44:27.837: INFO: Got endpoints: latency-svc-qjp6n [884.485706ms]
Apr 10 13:44:27.860: INFO: Created: latency-svc-mq7z7
Apr 10 13:44:27.876: INFO: Got endpoints: latency-svc-mq7z7 [835.001999ms]
Apr 10 13:44:27.923: INFO: Created: latency-svc-j4sp6
Apr 10 13:44:27.926: INFO: Got endpoints: latency-svc-j4sp6 [828.1801ms]
Apr 10 13:44:27.968: INFO: Created: latency-svc-jdj7w
Apr 10 13:44:27.982: INFO: Got endpoints: latency-svc-jdj7w [783.326715ms]
Apr 10 13:44:28.003: INFO: Created: latency-svc-j5w9n
Apr 10 13:44:28.012: INFO: Got endpoints: latency-svc-j5w9n [765.124232ms]
Apr 10 13:44:28.079: INFO: Created: latency-svc-qxxjq
Apr 10 13:44:28.108: INFO: Got endpoints: latency-svc-qxxjq [812.376227ms]
Apr 10 13:44:28.142: INFO: Created: latency-svc-br7lp
Apr 10 13:44:28.156: INFO: Got endpoints: latency-svc-br7lp [798.957772ms]
Apr 10 13:44:28.216: INFO: Created: latency-svc-bw8tf
Apr 10 13:44:28.219: INFO: Got endpoints: latency-svc-bw8tf [789.716814ms]
Apr 10 13:44:28.242: INFO: Created: 
latency-svc-w7pf2 Apr 10 13:44:28.260: INFO: Got endpoints: latency-svc-w7pf2 [773.113901ms] Apr 10 13:44:28.285: INFO: Created: latency-svc-9sfb4 Apr 10 13:44:28.310: INFO: Got endpoints: latency-svc-9sfb4 [786.224112ms] Apr 10 13:44:28.366: INFO: Created: latency-svc-njr4c Apr 10 13:44:28.381: INFO: Got endpoints: latency-svc-njr4c [826.915894ms] Apr 10 13:44:28.405: INFO: Created: latency-svc-4hsxj Apr 10 13:44:28.428: INFO: Got endpoints: latency-svc-4hsxj [795.178647ms] Apr 10 13:44:28.453: INFO: Created: latency-svc-p8qsb Apr 10 13:44:28.465: INFO: Got endpoints: latency-svc-p8qsb [796.55217ms] Apr 10 13:44:28.510: INFO: Created: latency-svc-fvnqh Apr 10 13:44:28.531: INFO: Got endpoints: latency-svc-fvnqh [815.086543ms] Apr 10 13:44:28.532: INFO: Created: latency-svc-vvpdd Apr 10 13:44:28.543: INFO: Got endpoints: latency-svc-vvpdd [758.739745ms] Apr 10 13:44:28.563: INFO: Created: latency-svc-rvd7q Apr 10 13:44:28.573: INFO: Got endpoints: latency-svc-rvd7q [736.205189ms] Apr 10 13:44:28.590: INFO: Created: latency-svc-8zl2f Apr 10 13:44:28.604: INFO: Got endpoints: latency-svc-8zl2f [727.378558ms] Apr 10 13:44:28.660: INFO: Created: latency-svc-bnqmr Apr 10 13:44:28.662: INFO: Got endpoints: latency-svc-bnqmr [736.123272ms] Apr 10 13:44:28.688: INFO: Created: latency-svc-v88t2 Apr 10 13:44:28.700: INFO: Got endpoints: latency-svc-v88t2 [718.283635ms] Apr 10 13:44:28.724: INFO: Created: latency-svc-vs7k7 Apr 10 13:44:28.737: INFO: Got endpoints: latency-svc-vs7k7 [725.541732ms] Apr 10 13:44:28.753: INFO: Created: latency-svc-h2g7q Apr 10 13:44:28.809: INFO: Got endpoints: latency-svc-h2g7q [700.887992ms] Apr 10 13:44:28.819: INFO: Created: latency-svc-wsf8m Apr 10 13:44:28.833: INFO: Got endpoints: latency-svc-wsf8m [676.968355ms] Apr 10 13:44:28.862: INFO: Created: latency-svc-wd46n Apr 10 13:44:28.875: INFO: Got endpoints: latency-svc-wd46n [655.929688ms] Apr 10 13:44:28.898: INFO: Created: latency-svc-fk9gr Apr 10 13:44:28.942: INFO: Got endpoints: 
latency-svc-fk9gr [681.318591ms] Apr 10 13:44:28.952: INFO: Created: latency-svc-dhrfn Apr 10 13:44:28.967: INFO: Got endpoints: latency-svc-dhrfn [656.338795ms] Apr 10 13:44:29.005: INFO: Created: latency-svc-g9c2t Apr 10 13:44:29.021: INFO: Got endpoints: latency-svc-g9c2t [640.113088ms] Apr 10 13:44:29.041: INFO: Created: latency-svc-b7s4r Apr 10 13:44:29.108: INFO: Got endpoints: latency-svc-b7s4r [680.043827ms] Apr 10 13:44:29.111: INFO: Created: latency-svc-qqtxh Apr 10 13:44:29.117: INFO: Got endpoints: latency-svc-qqtxh [652.148596ms] Apr 10 13:44:29.156: INFO: Created: latency-svc-p4ztt Apr 10 13:44:29.171: INFO: Got endpoints: latency-svc-p4ztt [639.668957ms] Apr 10 13:44:29.191: INFO: Created: latency-svc-h5bll Apr 10 13:44:29.202: INFO: Got endpoints: latency-svc-h5bll [658.735968ms] Apr 10 13:44:29.248: INFO: Created: latency-svc-nhtmk Apr 10 13:44:29.252: INFO: Got endpoints: latency-svc-nhtmk [678.62218ms] Apr 10 13:44:29.270: INFO: Created: latency-svc-4jqrh Apr 10 13:44:29.287: INFO: Got endpoints: latency-svc-4jqrh [682.940426ms] Apr 10 13:44:29.306: INFO: Created: latency-svc-s26nn Apr 10 13:44:29.323: INFO: Got endpoints: latency-svc-s26nn [660.065251ms] Apr 10 13:44:29.342: INFO: Created: latency-svc-27hfv Apr 10 13:44:29.378: INFO: Got endpoints: latency-svc-27hfv [677.529223ms] Apr 10 13:44:29.401: INFO: Created: latency-svc-w28ps Apr 10 13:44:29.414: INFO: Got endpoints: latency-svc-w28ps [676.326356ms] Apr 10 13:44:29.431: INFO: Created: latency-svc-85fbc Apr 10 13:44:29.443: INFO: Got endpoints: latency-svc-85fbc [634.000903ms] Apr 10 13:44:29.462: INFO: Created: latency-svc-9xpcv Apr 10 13:44:29.474: INFO: Got endpoints: latency-svc-9xpcv [640.458914ms] Apr 10 13:44:29.522: INFO: Created: latency-svc-wgr9t Apr 10 13:44:29.524: INFO: Got endpoints: latency-svc-wgr9t [649.12173ms] Apr 10 13:44:29.557: INFO: Created: latency-svc-mvrgx Apr 10 13:44:29.570: INFO: Got endpoints: latency-svc-mvrgx [628.552832ms] Apr 10 13:44:29.617: INFO: 
Created: latency-svc-zgg5z Apr 10 13:44:29.671: INFO: Got endpoints: latency-svc-zgg5z [704.757321ms] Apr 10 13:44:29.678: INFO: Created: latency-svc-2xwzl Apr 10 13:44:29.691: INFO: Got endpoints: latency-svc-2xwzl [669.636675ms] Apr 10 13:44:29.729: INFO: Created: latency-svc-x29q8 Apr 10 13:44:29.739: INFO: Got endpoints: latency-svc-x29q8 [630.331549ms] Apr 10 13:44:29.810: INFO: Created: latency-svc-vw542 Apr 10 13:44:29.812: INFO: Got endpoints: latency-svc-vw542 [695.438166ms] Apr 10 13:44:29.839: INFO: Created: latency-svc-hdrvv Apr 10 13:44:29.858: INFO: Got endpoints: latency-svc-hdrvv [686.71713ms] Apr 10 13:44:29.888: INFO: Created: latency-svc-zfbpn Apr 10 13:44:29.965: INFO: Got endpoints: latency-svc-zfbpn [762.985652ms] Apr 10 13:44:29.967: INFO: Created: latency-svc-hszc5 Apr 10 13:44:29.974: INFO: Got endpoints: latency-svc-hszc5 [722.02116ms] Apr 10 13:44:29.996: INFO: Created: latency-svc-54nh8 Apr 10 13:44:30.011: INFO: Got endpoints: latency-svc-54nh8 [723.787769ms] Apr 10 13:44:30.031: INFO: Created: latency-svc-m6vlk Apr 10 13:44:30.041: INFO: Got endpoints: latency-svc-m6vlk [718.88441ms] Apr 10 13:44:30.062: INFO: Created: latency-svc-fzszf Apr 10 13:44:30.114: INFO: Got endpoints: latency-svc-fzszf [736.460708ms] Apr 10 13:44:30.135: INFO: Created: latency-svc-x9ln5 Apr 10 13:44:30.149: INFO: Got endpoints: latency-svc-x9ln5 [735.49721ms] Apr 10 13:44:30.181: INFO: Created: latency-svc-46m66 Apr 10 13:44:30.192: INFO: Got endpoints: latency-svc-46m66 [748.455434ms] Apr 10 13:44:30.211: INFO: Created: latency-svc-nmb6s Apr 10 13:44:30.252: INFO: Got endpoints: latency-svc-nmb6s [778.276165ms] Apr 10 13:44:30.266: INFO: Created: latency-svc-2wsl4 Apr 10 13:44:30.296: INFO: Got endpoints: latency-svc-2wsl4 [771.782657ms] Apr 10 13:44:30.326: INFO: Created: latency-svc-5279s Apr 10 13:44:30.337: INFO: Got endpoints: latency-svc-5279s [766.661258ms] Apr 10 13:44:30.390: INFO: Created: latency-svc-wb28n Apr 10 13:44:30.393: INFO: Got endpoints: 
latency-svc-wb28n [721.090311ms] Apr 10 13:44:30.415: INFO: Created: latency-svc-p5sb9 Apr 10 13:44:30.427: INFO: Got endpoints: latency-svc-p5sb9 [736.454241ms] Apr 10 13:44:30.445: INFO: Created: latency-svc-cd2b5 Apr 10 13:44:30.488: INFO: Got endpoints: latency-svc-cd2b5 [748.822754ms] Apr 10 13:44:30.546: INFO: Created: latency-svc-smm28 Apr 10 13:44:30.554: INFO: Got endpoints: latency-svc-smm28 [741.374694ms] Apr 10 13:44:30.572: INFO: Created: latency-svc-mvmbg Apr 10 13:44:30.584: INFO: Got endpoints: latency-svc-mvmbg [726.277537ms] Apr 10 13:44:30.601: INFO: Created: latency-svc-9dmhn Apr 10 13:44:30.608: INFO: Got endpoints: latency-svc-9dmhn [643.870444ms] Apr 10 13:44:30.637: INFO: Created: latency-svc-fksnt Apr 10 13:44:30.695: INFO: Got endpoints: latency-svc-fksnt [721.041894ms] Apr 10 13:44:30.705: INFO: Created: latency-svc-8z79n Apr 10 13:44:30.717: INFO: Got endpoints: latency-svc-8z79n [706.72414ms] Apr 10 13:44:30.740: INFO: Created: latency-svc-wv8qg Apr 10 13:44:30.754: INFO: Got endpoints: latency-svc-wv8qg [712.006213ms] Apr 10 13:44:30.774: INFO: Created: latency-svc-xgbfv Apr 10 13:44:30.790: INFO: Got endpoints: latency-svc-xgbfv [675.589966ms] Apr 10 13:44:30.853: INFO: Created: latency-svc-4wgrt Apr 10 13:44:30.862: INFO: Got endpoints: latency-svc-4wgrt [712.362206ms] Apr 10 13:44:30.883: INFO: Created: latency-svc-n9vg6 Apr 10 13:44:30.908: INFO: Got endpoints: latency-svc-n9vg6 [716.112521ms] Apr 10 13:44:30.971: INFO: Created: latency-svc-gjhlk Apr 10 13:44:30.973: INFO: Got endpoints: latency-svc-gjhlk [720.916148ms] Apr 10 13:44:31.015: INFO: Created: latency-svc-zzlrm Apr 10 13:44:31.031: INFO: Got endpoints: latency-svc-zzlrm [735.026597ms] Apr 10 13:44:31.052: INFO: Created: latency-svc-kdztx Apr 10 13:44:31.061: INFO: Got endpoints: latency-svc-kdztx [724.285199ms] Apr 10 13:44:31.121: INFO: Created: latency-svc-rp9zf Apr 10 13:44:31.142: INFO: Got endpoints: latency-svc-rp9zf [749.834883ms] Apr 10 13:44:31.194: INFO: 
Created: latency-svc-smpl7 Apr 10 13:44:31.200: INFO: Got endpoints: latency-svc-smpl7 [772.327003ms] Apr 10 13:44:31.270: INFO: Created: latency-svc-9frj8 Apr 10 13:44:31.273: INFO: Got endpoints: latency-svc-9frj8 [785.286712ms] Apr 10 13:44:31.300: INFO: Created: latency-svc-7thr9 Apr 10 13:44:31.314: INFO: Got endpoints: latency-svc-7thr9 [760.615855ms] Apr 10 13:44:31.334: INFO: Created: latency-svc-wmtpv Apr 10 13:44:31.345: INFO: Got endpoints: latency-svc-wmtpv [760.499242ms] Apr 10 13:44:31.370: INFO: Created: latency-svc-jw967 Apr 10 13:44:31.423: INFO: Got endpoints: latency-svc-jw967 [814.547672ms] Apr 10 13:44:31.447: INFO: Created: latency-svc-vm2l5 Apr 10 13:44:31.485: INFO: Got endpoints: latency-svc-vm2l5 [789.486882ms] Apr 10 13:44:31.500: INFO: Created: latency-svc-zjgpb Apr 10 13:44:31.539: INFO: Got endpoints: latency-svc-zjgpb [821.891569ms] Apr 10 13:44:31.550: INFO: Created: latency-svc-ds9zg Apr 10 13:44:31.562: INFO: Got endpoints: latency-svc-ds9zg [808.659707ms] Apr 10 13:44:31.580: INFO: Created: latency-svc-l4c8c Apr 10 13:44:31.592: INFO: Got endpoints: latency-svc-l4c8c [802.473381ms] Apr 10 13:44:31.610: INFO: Created: latency-svc-vtk9c Apr 10 13:44:31.666: INFO: Got endpoints: latency-svc-vtk9c [803.673435ms] Apr 10 13:44:31.693: INFO: Created: latency-svc-6fjdt Apr 10 13:44:31.707: INFO: Got endpoints: latency-svc-6fjdt [799.107102ms] Apr 10 13:44:31.730: INFO: Created: latency-svc-t8nj8 Apr 10 13:44:31.743: INFO: Got endpoints: latency-svc-t8nj8 [770.429133ms] Apr 10 13:44:31.760: INFO: Created: latency-svc-vljbh Apr 10 13:44:31.799: INFO: Got endpoints: latency-svc-vljbh [767.448813ms] Apr 10 13:44:31.802: INFO: Created: latency-svc-l2h68 Apr 10 13:44:31.816: INFO: Got endpoints: latency-svc-l2h68 [754.568212ms] Apr 10 13:44:31.843: INFO: Created: latency-svc-tskz2 Apr 10 13:44:31.858: INFO: Got endpoints: latency-svc-tskz2 [715.677428ms] Apr 10 13:44:31.878: INFO: Created: latency-svc-ssq4q Apr 10 13:44:31.890: INFO: Got 
endpoints: latency-svc-ssq4q [690.192923ms] Apr 10 13:44:31.948: INFO: Created: latency-svc-h842m Apr 10 13:44:31.976: INFO: Got endpoints: latency-svc-h842m [703.320163ms] Apr 10 13:44:31.976: INFO: Created: latency-svc-bl28b Apr 10 13:44:31.991: INFO: Got endpoints: latency-svc-bl28b [101.285894ms] Apr 10 13:44:32.011: INFO: Created: latency-svc-t97rz Apr 10 13:44:32.027: INFO: Got endpoints: latency-svc-t97rz [713.048109ms] Apr 10 13:44:32.085: INFO: Created: latency-svc-nvkr8 Apr 10 13:44:32.088: INFO: Got endpoints: latency-svc-nvkr8 [743.09741ms] Apr 10 13:44:32.112: INFO: Created: latency-svc-zmrbb Apr 10 13:44:32.124: INFO: Got endpoints: latency-svc-zmrbb [700.786232ms] Apr 10 13:44:32.144: INFO: Created: latency-svc-v7cf5 Apr 10 13:44:32.154: INFO: Got endpoints: latency-svc-v7cf5 [669.626973ms] Apr 10 13:44:32.173: INFO: Created: latency-svc-lsbcx Apr 10 13:44:32.246: INFO: Got endpoints: latency-svc-lsbcx [706.878723ms] Apr 10 13:44:32.251: INFO: Created: latency-svc-vdhrv Apr 10 13:44:32.263: INFO: Got endpoints: latency-svc-vdhrv [700.635602ms] Apr 10 13:44:32.280: INFO: Created: latency-svc-5pbr8 Apr 10 13:44:32.293: INFO: Got endpoints: latency-svc-5pbr8 [700.947553ms] Apr 10 13:44:32.323: INFO: Created: latency-svc-fqgkd Apr 10 13:44:32.336: INFO: Got endpoints: latency-svc-fqgkd [670.041698ms] Apr 10 13:44:32.384: INFO: Created: latency-svc-98kzl Apr 10 13:44:32.390: INFO: Got endpoints: latency-svc-98kzl [682.871445ms] Apr 10 13:44:32.407: INFO: Created: latency-svc-ff4gf Apr 10 13:44:32.431: INFO: Got endpoints: latency-svc-ff4gf [687.840084ms] Apr 10 13:44:32.456: INFO: Created: latency-svc-9nbnd Apr 10 13:44:32.468: INFO: Got endpoints: latency-svc-9nbnd [669.675686ms] Apr 10 13:44:32.523: INFO: Created: latency-svc-2lfc9 Apr 10 13:44:32.544: INFO: Got endpoints: latency-svc-2lfc9 [728.511553ms] Apr 10 13:44:32.545: INFO: Created: latency-svc-rthjf Apr 10 13:44:32.559: INFO: Got endpoints: latency-svc-rthjf [700.755701ms] Apr 10 13:44:32.581: 
INFO: Created: latency-svc-jlf55 Apr 10 13:44:32.606: INFO: Got endpoints: latency-svc-jlf55 [629.390198ms] Apr 10 13:44:32.660: INFO: Created: latency-svc-z9pxf Apr 10 13:44:32.662: INFO: Got endpoints: latency-svc-z9pxf [670.93354ms] Apr 10 13:44:32.690: INFO: Created: latency-svc-q7zlx Apr 10 13:44:32.698: INFO: Got endpoints: latency-svc-q7zlx [670.547907ms] Apr 10 13:44:32.725: INFO: Created: latency-svc-dxll7 Apr 10 13:44:32.740: INFO: Got endpoints: latency-svc-dxll7 [651.968025ms] Apr 10 13:44:32.797: INFO: Created: latency-svc-zgjdc Apr 10 13:44:32.806: INFO: Got endpoints: latency-svc-zgjdc [682.175211ms] Apr 10 13:44:32.828: INFO: Created: latency-svc-zdl9p Apr 10 13:44:32.852: INFO: Got endpoints: latency-svc-zdl9p [697.511646ms] Apr 10 13:44:32.883: INFO: Created: latency-svc-stmnn Apr 10 13:44:32.891: INFO: Got endpoints: latency-svc-stmnn [644.819148ms] Apr 10 13:44:32.935: INFO: Created: latency-svc-qs96r Apr 10 13:44:32.937: INFO: Got endpoints: latency-svc-qs96r [674.501943ms] Apr 10 13:44:32.971: INFO: Created: latency-svc-lfzqf Apr 10 13:44:32.984: INFO: Got endpoints: latency-svc-lfzqf [690.956817ms] Apr 10 13:44:33.000: INFO: Created: latency-svc-nlb68 Apr 10 13:44:33.012: INFO: Got endpoints: latency-svc-nlb68 [676.175206ms] Apr 10 13:44:33.012: INFO: Latencies: [71.867287ms 101.285894ms 131.362345ms 215.44939ms 257.410082ms 286.347047ms 331.75851ms 382.884859ms 419.169923ms 470.052095ms 527.004125ms 563.590581ms 599.468642ms 602.090479ms 616.097512ms 617.442184ms 628.552832ms 628.588072ms 629.390198ms 630.047162ms 630.331549ms 634.000903ms 638.957531ms 639.668957ms 640.113088ms 640.458914ms 643.870444ms 644.819148ms 649.12173ms 651.968025ms 652.148596ms 655.929688ms 656.338795ms 658.735968ms 660.065251ms 669.626973ms 669.636675ms 669.675686ms 670.041698ms 670.547907ms 670.93354ms 674.501943ms 675.589966ms 676.175206ms 676.326356ms 676.968355ms 677.529223ms 678.62218ms 679.404896ms 680.043827ms 681.318591ms 682.175211ms 682.871445ms 
682.940426ms 686.71713ms 687.840084ms 688.37854ms 688.857119ms 690.192923ms 690.956817ms 692.010478ms 695.438166ms 697.511646ms 700.635602ms 700.755701ms 700.786232ms 700.887992ms 700.947553ms 703.320163ms 704.757321ms 706.72414ms 706.878723ms 709.37817ms 711.569652ms 712.006213ms 712.362206ms 712.520095ms 713.048109ms 715.677428ms 716.112521ms 717.448065ms 717.608663ms 717.831804ms 717.933544ms 718.283635ms 718.88441ms 720.916148ms 721.041894ms 721.090311ms 721.102093ms 721.476984ms 721.777194ms 722.02116ms 723.507883ms 723.747509ms 723.787769ms 724.285199ms 724.517631ms 725.541732ms 726.277537ms 727.378558ms 727.633371ms 728.511553ms 728.538757ms 731.794804ms 732.76115ms 734.680182ms 735.026597ms 735.269873ms 735.49721ms 736.123272ms 736.205189ms 736.454241ms 736.460708ms 738.798505ms 741.374694ms 743.09741ms 743.613378ms 743.872005ms 748.455434ms 748.822754ms 749.834883ms 754.568212ms 758.739745ms 759.049217ms 759.961062ms 760.499242ms 760.615855ms 761.50965ms 762.802894ms 762.985652ms 765.124232ms 766.162772ms 766.661258ms 767.448813ms 770.429133ms 771.782657ms 772.327003ms 773.113901ms 774.603065ms 777.521596ms 778.276165ms 779.142978ms 780.071584ms 780.512921ms 783.326715ms 783.824764ms 785.286712ms 785.369247ms 786.224112ms 787.076657ms 789.17211ms 789.486882ms 789.716814ms 789.877235ms 792.37651ms 792.988557ms 794.372876ms 795.033882ms 795.178647ms 796.152772ms 796.55217ms 798.510626ms 798.957772ms 799.107102ms 800.509949ms 802.473381ms 803.673435ms 803.832926ms 808.659707ms 812.376227ms 813.475672ms 814.372074ms 814.547672ms 814.672085ms 815.086543ms 819.776452ms 821.891569ms 826.915894ms 828.1801ms 829.96792ms 835.001999ms 842.628588ms 862.662725ms 862.737058ms 865.383242ms 876.921166ms 884.485706ms 884.84499ms 887.75681ms 924.623632ms 924.639112ms 924.767755ms 924.943885ms 930.935374ms 936.120739ms 936.663414ms 947.237486ms 949.424362ms 962.856419ms] Apr 10 13:44:33.012: INFO: 50 %ile: 727.378558ms Apr 10 13:44:33.012: INFO: 90 %ile: 829.96792ms Apr 10 
13:44:33.012: INFO: 99 %ile: 949.424362ms Apr 10 13:44:33.012: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:44:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-848" for this suite. Apr 10 13:45:05.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:45:05.166: INFO: namespace svc-latency-848 deletion completed in 32.134060715s • [SLOW TEST:46.280 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:45:05.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-41edaf44-ebf7-4bf2-aadf-b1fda00a88e9 STEP: Creating a pod to test consume secrets Apr 10 13:45:05.278: INFO: Waiting up to 5m0s for pod "pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f" in namespace 
"secrets-2629" to be "success or failure" Apr 10 13:45:05.296: INFO: Pod "pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.842057ms Apr 10 13:45:07.373: INFO: Pod "pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095232737s Apr 10 13:45:09.378: INFO: Pod "pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099891644s STEP: Saw pod success Apr 10 13:45:09.378: INFO: Pod "pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f" satisfied condition "success or failure" Apr 10 13:45:09.381: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f container secret-env-test: STEP: delete the pod Apr 10 13:45:09.419: INFO: Waiting for pod pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f to disappear Apr 10 13:45:09.426: INFO: Pod pod-secrets-9c0b67ff-e306-43c8-9af6-72d25ab48a0f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:45:09.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2629" for this suite. 
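The `[sig-api-machinery] Secrets` test above creates a Secret and a pod whose container reads one Secret key through an environment variable. A minimal illustrative manifest of that pattern (all names here are hypothetical, not the generated ones from this run):

```yaml
# Sketch of the secret-as-env-var pattern exercised above.
# Names are illustrative, not taken from this test run.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The pod runs to completion ("Succeeded") and the framework then checks the container log for the expected variable, which matches the "success or failure" wait seen in the log.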
Apr 10 13:45:15.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:45:15.522: INFO: namespace secrets-2629 deletion completed in 6.093321778s • [SLOW TEST:10.355 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:45:15.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 10 13:45:15.555: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix421642437/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:45:15.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4635" for this 
suite. Apr 10 13:45:21.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:45:21.740: INFO: namespace kubectl-4635 deletion completed in 6.097263818s • [SLOW TEST:6.217 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:45:21.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-88df4d2a-738a-408d-8040-036a2b0daec2 STEP: Creating a pod to test consume configMaps Apr 10 13:45:21.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695" in namespace "configmap-5845" to be "success or failure" Apr 10 13:45:21.815: INFO: Pod "pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.192359ms Apr 10 13:45:23.819: INFO: Pod "pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006816111s Apr 10 13:45:25.823: INFO: Pod "pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011016169s STEP: Saw pod success Apr 10 13:45:25.823: INFO: Pod "pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695" satisfied condition "success or failure" Apr 10 13:45:25.826: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695 container configmap-volume-test: STEP: delete the pod Apr 10 13:45:25.884: INFO: Waiting for pod pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695 to disappear Apr 10 13:45:25.887: INFO: Pod pod-configmaps-b02010a7-dd7e-4a53-8329-83e4cd77f695 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:45:25.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5845" for this suite. 
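The ConfigMap test above mounts the same ConfigMap at two different paths in one pod. A minimal illustrative manifest of that shape (names are hypothetical, not the generated ones from this run):

```yaml
# Sketch of one ConfigMap consumed as two volumes in the same pod.
# Names are illustrative, not taken from this test run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```

Both mounts resolve to the same data, so the container can verify the content from either path before exiting.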
Apr 10 13:45:31.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:45:31.968: INFO: namespace configmap-5845 deletion completed in 6.077664654s • [SLOW TEST:10.228 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:45:31.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 10 13:45:36.103: INFO: Pod pod-hostip-bf4cd20f-99a6-4388-b4e5-9ae16961dbc9 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:45:36.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5576" for this suite. 
Apr 10 13:45:58.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:45:58.199: INFO: namespace pods-5576 deletion completed in 22.092005334s
• [SLOW TEST:26.231 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:45:58.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 10 13:46:08.321: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.321: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.361571 6 log.go:172] (0xc0012da840) (0xc0013512c0) Create stream
I0410 13:46:08.361603 6 log.go:172] (0xc0012da840) (0xc0013512c0) Stream added, broadcasting: 1
I0410 13:46:08.363819 6 log.go:172] (0xc0012da840) Reply frame received for 1
I0410 13:46:08.363867 6 log.go:172] (0xc0012da840) (0xc001351400) Create stream
I0410 13:46:08.363879 6 log.go:172] (0xc0012da840) (0xc001351400) Stream added, broadcasting: 3
I0410 13:46:08.365003 6 log.go:172] (0xc0012da840) Reply frame received for 3
I0410 13:46:08.365063 6 log.go:172] (0xc0012da840) (0xc0013515e0) Create stream
I0410 13:46:08.365090 6 log.go:172] (0xc0012da840) (0xc0013515e0) Stream added, broadcasting: 5
I0410 13:46:08.366534 6 log.go:172] (0xc0012da840) Reply frame received for 5
I0410 13:46:08.430420 6 log.go:172] (0xc0012da840) Data frame received for 5
I0410 13:46:08.430456 6 log.go:172] (0xc0013515e0) (5) Data frame handling
I0410 13:46:08.430482 6 log.go:172] (0xc0012da840) Data frame received for 3
I0410 13:46:08.430501 6 log.go:172] (0xc001351400) (3) Data frame handling
I0410 13:46:08.430516 6 log.go:172] (0xc001351400) (3) Data frame sent
I0410 13:46:08.430529 6 log.go:172] (0xc0012da840) Data frame received for 3
I0410 13:46:08.430541 6 log.go:172] (0xc001351400) (3) Data frame handling
I0410 13:46:08.432311 6 log.go:172] (0xc0012da840) Data frame received for 1
I0410 13:46:08.432391 6 log.go:172] (0xc0013512c0) (1) Data frame handling
I0410 13:46:08.432447 6 log.go:172] (0xc0013512c0) (1) Data frame sent
I0410 13:46:08.432480 6 log.go:172] (0xc0012da840) (0xc0013512c0) Stream removed, broadcasting: 1
I0410 13:46:08.432523 6 log.go:172] (0xc0012da840) Go away received
I0410 13:46:08.432635 6 log.go:172] (0xc0012da840) (0xc0013512c0) Stream removed, broadcasting: 1
I0410 13:46:08.432666 6 log.go:172] (0xc0012da840) (0xc001351400) Stream removed, broadcasting: 3
I0410 13:46:08.432685 6 log.go:172] (0xc0012da840) (0xc0013515e0) Stream removed, broadcasting: 5
Apr 10 13:46:08.432: INFO: Exec stderr: ""
Apr 10 13:46:08.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.432: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.503003 6 log.go:172] (0xc000bb8fd0) (0xc0027f0d20) Create stream
I0410 13:46:08.503071 6 log.go:172] (0xc000bb8fd0) (0xc0027f0d20) Stream added, broadcasting: 1
I0410 13:46:08.505712 6 log.go:172] (0xc000bb8fd0) Reply frame received for 1
I0410 13:46:08.505761 6 log.go:172] (0xc000bb8fd0) (0xc0027f0dc0) Create stream
I0410 13:46:08.505773 6 log.go:172] (0xc000bb8fd0) (0xc0027f0dc0) Stream added, broadcasting: 3
I0410 13:46:08.506533 6 log.go:172] (0xc000bb8fd0) Reply frame received for 3
I0410 13:46:08.506572 6 log.go:172] (0xc000bb8fd0) (0xc001351680) Create stream
I0410 13:46:08.506582 6 log.go:172] (0xc000bb8fd0) (0xc001351680) Stream added, broadcasting: 5
I0410 13:46:08.521511 6 log.go:172] (0xc000bb8fd0) Reply frame received for 5
I0410 13:46:08.577562 6 log.go:172] (0xc000bb8fd0) Data frame received for 5
I0410 13:46:08.577601 6 log.go:172] (0xc001351680) (5) Data frame handling
I0410 13:46:08.577914 6 log.go:172] (0xc000bb8fd0) Data frame received for 3
I0410 13:46:08.577940 6 log.go:172] (0xc0027f0dc0) (3) Data frame handling
I0410 13:46:08.577967 6 log.go:172] (0xc0027f0dc0) (3) Data frame sent
I0410 13:46:08.577981 6 log.go:172] (0xc000bb8fd0) Data frame received for 3
I0410 13:46:08.577995 6 log.go:172] (0xc0027f0dc0) (3) Data frame handling
I0410 13:46:08.579435 6 log.go:172] (0xc000bb8fd0) Data frame received for 1
I0410 13:46:08.579466 6 log.go:172] (0xc0027f0d20) (1) Data frame handling
I0410 13:46:08.579495 6 log.go:172] (0xc0027f0d20) (1) Data frame sent
I0410 13:46:08.579526 6 log.go:172] (0xc000bb8fd0) (0xc0027f0d20) Stream removed, broadcasting: 1
I0410 13:46:08.579546 6 log.go:172] (0xc000bb8fd0) Go away received
I0410 13:46:08.579649 6 log.go:172] (0xc000bb8fd0) (0xc0027f0d20) Stream removed, broadcasting: 1
I0410 13:46:08.579670 6 log.go:172] (0xc000bb8fd0) (0xc0027f0dc0) Stream removed, broadcasting: 3
I0410 13:46:08.579679 6 log.go:172] (0xc000bb8fd0) (0xc001351680) Stream removed, broadcasting: 5
Apr 10 13:46:08.579: INFO: Exec stderr: ""
Apr 10 13:46:08.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.579: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.608680 6 log.go:172] (0xc000bb9ef0) (0xc0027f10e0) Create stream
I0410 13:46:08.608703 6 log.go:172] (0xc000bb9ef0) (0xc0027f10e0) Stream added, broadcasting: 1
I0410 13:46:08.610983 6 log.go:172] (0xc000bb9ef0) Reply frame received for 1
I0410 13:46:08.611060 6 log.go:172] (0xc000bb9ef0) (0xc002b3f0e0) Create stream
I0410 13:46:08.611076 6 log.go:172] (0xc000bb9ef0) (0xc002b3f0e0) Stream added, broadcasting: 3
I0410 13:46:08.612115 6 log.go:172] (0xc000bb9ef0) Reply frame received for 3
I0410 13:46:08.612155 6 log.go:172] (0xc000bb9ef0) (0xc0027f1180) Create stream
I0410 13:46:08.612163 6 log.go:172] (0xc000bb9ef0) (0xc0027f1180) Stream added, broadcasting: 5
I0410 13:46:08.613091 6 log.go:172] (0xc000bb9ef0) Reply frame received for 5
I0410 13:46:08.681227 6 log.go:172] (0xc000bb9ef0) Data frame received for 5
I0410 13:46:08.681261 6 log.go:172] (0xc0027f1180) (5) Data frame handling
I0410 13:46:08.681295 6 log.go:172] (0xc000bb9ef0) Data frame received for 3
I0410 13:46:08.681308 6 log.go:172] (0xc002b3f0e0) (3) Data frame handling
I0410 13:46:08.681328 6 log.go:172] (0xc002b3f0e0) (3) Data frame sent
I0410 13:46:08.681370 6 log.go:172] (0xc000bb9ef0) Data frame received for 3
I0410 13:46:08.681391 6 log.go:172] (0xc002b3f0e0) (3) Data frame handling
I0410 13:46:08.682902 6 log.go:172] (0xc000bb9ef0) Data frame received for 1
I0410 13:46:08.682919 6 log.go:172] (0xc0027f10e0) (1) Data frame handling
I0410 13:46:08.682926 6 log.go:172] (0xc0027f10e0) (1) Data frame sent
I0410 13:46:08.682937 6 log.go:172] (0xc000bb9ef0) (0xc0027f10e0) Stream removed, broadcasting: 1
I0410 13:46:08.683008 6 log.go:172] (0xc000bb9ef0) Go away received
I0410 13:46:08.683087 6 log.go:172] (0xc000bb9ef0) (0xc0027f10e0) Stream removed, broadcasting: 1
I0410 13:46:08.683143 6 log.go:172] (0xc000bb9ef0) (0xc002b3f0e0) Stream removed, broadcasting: 3
I0410 13:46:08.683177 6 log.go:172] (0xc000bb9ef0) (0xc0027f1180) Stream removed, broadcasting: 5
Apr 10 13:46:08.683: INFO: Exec stderr: ""
Apr 10 13:46:08.683: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.683: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.720262 6 log.go:172] (0xc0012db760) (0xc001351860) Create stream
I0410 13:46:08.720309 6 log.go:172] (0xc0012db760) (0xc001351860) Stream added, broadcasting: 1
I0410 13:46:08.722974 6 log.go:172] (0xc0012db760) Reply frame received for 1
I0410 13:46:08.723011 6 log.go:172] (0xc0012db760) (0xc0027f1360) Create stream
I0410 13:46:08.723028 6 log.go:172] (0xc0012db760) (0xc0027f1360) Stream added, broadcasting: 3
I0410 13:46:08.724397 6 log.go:172] (0xc0012db760) Reply frame received for 3
I0410 13:46:08.724440 6 log.go:172] (0xc0012db760) (0xc002c87e00) Create stream
I0410 13:46:08.724452 6 log.go:172] (0xc0012db760) (0xc002c87e00) Stream added, broadcasting: 5
I0410 13:46:08.725673 6 log.go:172] (0xc0012db760) Reply frame received for 5
I0410 13:46:08.786726 6 log.go:172] (0xc0012db760) Data frame received for 5
I0410 13:46:08.786761 6 log.go:172] (0xc002c87e00) (5) Data frame handling
I0410 13:46:08.786784 6 log.go:172] (0xc0012db760) Data frame received for 3
I0410 13:46:08.786807 6 log.go:172] (0xc0027f1360) (3) Data frame handling
I0410 13:46:08.786824 6 log.go:172] (0xc0027f1360) (3) Data frame sent
I0410 13:46:08.786871 6 log.go:172] (0xc0012db760) Data frame received for 3
I0410 13:46:08.786884 6 log.go:172] (0xc0027f1360) (3) Data frame handling
I0410 13:46:08.788038 6 log.go:172] (0xc0012db760) Data frame received for 1
I0410 13:46:08.788064 6 log.go:172] (0xc001351860) (1) Data frame handling
I0410 13:46:08.788089 6 log.go:172] (0xc001351860) (1) Data frame sent
I0410 13:46:08.788108 6 log.go:172] (0xc0012db760) (0xc001351860) Stream removed, broadcasting: 1
I0410 13:46:08.788129 6 log.go:172] (0xc0012db760) Go away received
I0410 13:46:08.788216 6 log.go:172] (0xc0012db760) (0xc001351860) Stream removed, broadcasting: 1
I0410 13:46:08.788239 6 log.go:172] (0xc0012db760) (0xc0027f1360) Stream removed, broadcasting: 3
I0410 13:46:08.788253 6 log.go:172] (0xc0012db760) (0xc002c87e00) Stream removed, broadcasting: 5
Apr 10 13:46:08.788: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 10 13:46:08.788: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.788: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.817770 6 log.go:172] (0xc000e3d290) (0xc002b3f400) Create stream
I0410 13:46:08.817801 6 log.go:172] (0xc000e3d290) (0xc002b3f400) Stream added, broadcasting: 1
I0410 13:46:08.820139 6 log.go:172] (0xc000e3d290) Reply frame received for 1
I0410 13:46:08.820178 6 log.go:172] (0xc000e3d290) (0xc002b3f4a0) Create stream
I0410 13:46:08.820194 6 log.go:172] (0xc000e3d290) (0xc002b3f4a0) Stream added, broadcasting: 3
I0410 13:46:08.821353 6 log.go:172] (0xc000e3d290) Reply frame received for 3
I0410 13:46:08.821394 6 log.go:172] (0xc000e3d290) (0xc002b3f540) Create stream
I0410 13:46:08.821406 6 log.go:172] (0xc000e3d290) (0xc002b3f540) Stream added, broadcasting: 5
I0410 13:46:08.822284 6 log.go:172] (0xc000e3d290) Reply frame received for 5
I0410 13:46:08.872506 6 log.go:172] (0xc000e3d290) Data frame received for 5
I0410 13:46:08.872549 6 log.go:172] (0xc002b3f540) (5) Data frame handling
I0410 13:46:08.872577 6 log.go:172] (0xc000e3d290) Data frame received for 3
I0410 13:46:08.872596 6 log.go:172] (0xc002b3f4a0) (3) Data frame handling
I0410 13:46:08.872610 6 log.go:172] (0xc002b3f4a0) (3) Data frame sent
I0410 13:46:08.872623 6 log.go:172] (0xc000e3d290) Data frame received for 3
I0410 13:46:08.872634 6 log.go:172] (0xc002b3f4a0) (3) Data frame handling
I0410 13:46:08.874418 6 log.go:172] (0xc000e3d290) Data frame received for 1
I0410 13:46:08.874463 6 log.go:172] (0xc002b3f400) (1) Data frame handling
I0410 13:46:08.874494 6 log.go:172] (0xc002b3f400) (1) Data frame sent
I0410 13:46:08.874517 6 log.go:172] (0xc000e3d290) (0xc002b3f400) Stream removed, broadcasting: 1
I0410 13:46:08.874542 6 log.go:172] (0xc000e3d290) Go away received
I0410 13:46:08.874733 6 log.go:172] (0xc000e3d290) (0xc002b3f400) Stream removed, broadcasting: 1
I0410 13:46:08.874774 6 log.go:172] (0xc000e3d290) (0xc002b3f4a0) Stream removed, broadcasting: 3
I0410 13:46:08.874786 6 log.go:172] (0xc000e3d290) (0xc002b3f540) Stream removed, broadcasting: 5
Apr 10 13:46:08.874: INFO: Exec stderr: ""
Apr 10 13:46:08.874: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.874: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:08.914344 6 log.go:172] (0xc002ca51e0) (0xc0027f1680) Create stream
I0410 13:46:08.914458 6 log.go:172] (0xc002ca51e0) (0xc0027f1680) Stream added, broadcasting: 1
I0410 13:46:08.917104 6 log.go:172] (0xc002ca51e0) Reply frame received for 1
I0410 13:46:08.917243 6 log.go:172] (0xc002ca51e0) (0xc0027f1720) Create stream
I0410 13:46:08.917252 6 log.go:172] (0xc002ca51e0) (0xc0027f1720) Stream added, broadcasting: 3
I0410 13:46:08.918211 6 log.go:172] (0xc002ca51e0) Reply frame received for 3
I0410 13:46:08.918246 6 log.go:172] (0xc002ca51e0) (0xc000519f40) Create stream
I0410 13:46:08.918257 6 log.go:172] (0xc002ca51e0) (0xc000519f40) Stream added, broadcasting: 5
I0410 13:46:08.918999 6 log.go:172] (0xc002ca51e0) Reply frame received for 5
I0410 13:46:08.976346 6 log.go:172] (0xc002ca51e0) Data frame received for 5
I0410 13:46:08.976381 6 log.go:172] (0xc000519f40) (5) Data frame handling
I0410 13:46:08.976438 6 log.go:172] (0xc002ca51e0) Data frame received for 3
I0410 13:46:08.976480 6 log.go:172] (0xc0027f1720) (3) Data frame handling
I0410 13:46:08.976505 6 log.go:172] (0xc0027f1720) (3) Data frame sent
I0410 13:46:08.976520 6 log.go:172] (0xc002ca51e0) Data frame received for 3
I0410 13:46:08.976538 6 log.go:172] (0xc0027f1720) (3) Data frame handling
I0410 13:46:08.977955 6 log.go:172] (0xc002ca51e0) Data frame received for 1
I0410 13:46:08.978008 6 log.go:172] (0xc0027f1680) (1) Data frame handling
I0410 13:46:08.978039 6 log.go:172] (0xc0027f1680) (1) Data frame sent
I0410 13:46:08.978062 6 log.go:172] (0xc002ca51e0) (0xc0027f1680) Stream removed, broadcasting: 1
I0410 13:46:08.978214 6 log.go:172] (0xc002ca51e0) (0xc0027f1680) Stream removed, broadcasting: 1
I0410 13:46:08.978231 6 log.go:172] (0xc002ca51e0) (0xc0027f1720) Stream removed, broadcasting: 3
I0410 13:46:08.978304 6 log.go:172] (0xc002ca51e0) Go away received
I0410 13:46:08.978343 6 log.go:172] (0xc002ca51e0) (0xc000519f40) Stream removed, broadcasting: 5
Apr 10 13:46:08.978: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 10 13:46:08.978: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:08.978: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:09.015005 6 log.go:172] (0xc0030ddc30) (0xc002b9c280) Create stream
I0410 13:46:09.015025 6 log.go:172] (0xc0030ddc30) (0xc002b9c280) Stream added, broadcasting: 1
I0410 13:46:09.021924 6 log.go:172] (0xc0030ddc30) Reply frame received for 1
I0410 13:46:09.021988 6 log.go:172] (0xc0030ddc30) (0xc002b3f5e0) Create stream
I0410 13:46:09.022006 6 log.go:172] (0xc0030ddc30) (0xc002b3f5e0) Stream added, broadcasting: 3
I0410 13:46:09.023273 6 log.go:172] (0xc0030ddc30) Reply frame received for 3
I0410 13:46:09.023336 6 log.go:172] (0xc0030ddc30) (0xc002c87ea0) Create stream
I0410 13:46:09.023371 6 log.go:172] (0xc0030ddc30) (0xc002c87ea0) Stream added, broadcasting: 5
I0410 13:46:09.025471 6 log.go:172] (0xc0030ddc30) Reply frame received for 5
I0410 13:46:09.083502 6 log.go:172] (0xc0030ddc30) Data frame received for 5
I0410 13:46:09.083538 6 log.go:172] (0xc002c87ea0) (5) Data frame handling
I0410 13:46:09.083559 6 log.go:172] (0xc0030ddc30) Data frame received for 3
I0410 13:46:09.083568 6 log.go:172] (0xc002b3f5e0) (3) Data frame handling
I0410 13:46:09.083579 6 log.go:172] (0xc002b3f5e0) (3) Data frame sent
I0410 13:46:09.083588 6 log.go:172] (0xc0030ddc30) Data frame received for 3
I0410 13:46:09.083596 6 log.go:172] (0xc002b3f5e0) (3) Data frame handling
I0410 13:46:09.085090 6 log.go:172] (0xc0030ddc30) Data frame received for 1
I0410 13:46:09.085192 6 log.go:172] (0xc002b9c280) (1) Data frame handling
I0410 13:46:09.085210 6 log.go:172] (0xc002b9c280) (1) Data frame sent
I0410 13:46:09.085220 6 log.go:172] (0xc0030ddc30) (0xc002b9c280) Stream removed, broadcasting: 1
I0410 13:46:09.085243 6 log.go:172] (0xc0030ddc30) Go away received
I0410 13:46:09.085366 6 log.go:172] (0xc0030ddc30) (0xc002b9c280) Stream removed, broadcasting: 1
I0410 13:46:09.085379 6 log.go:172] (0xc0030ddc30) (0xc002b3f5e0) Stream removed, broadcasting: 3
I0410 13:46:09.085383 6 log.go:172] (0xc0030ddc30) (0xc002c87ea0) Stream removed, broadcasting: 5
Apr 10 13:46:09.085: INFO: Exec stderr: ""
Apr 10 13:46:09.085: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:09.085: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:09.113556 6 log.go:172] (0xc002f82420) (0xc0017ca3c0) Create stream
I0410 13:46:09.113583 6 log.go:172] (0xc002f82420) (0xc0017ca3c0) Stream added, broadcasting: 1
I0410 13:46:09.116131 6 log.go:172] (0xc002f82420) Reply frame received for 1
I0410 13:46:09.116170 6 log.go:172] (0xc002f82420) (0xc001351900) Create stream
I0410 13:46:09.116188 6 log.go:172] (0xc002f82420) (0xc001351900) Stream added, broadcasting: 3
I0410 13:46:09.117497 6 log.go:172] (0xc002f82420) Reply frame received for 3
I0410 13:46:09.117536 6 log.go:172] (0xc002f82420) (0xc001351ae0) Create stream
I0410 13:46:09.117556 6 log.go:172] (0xc002f82420) (0xc001351ae0) Stream added, broadcasting: 5
I0410 13:46:09.118602 6 log.go:172] (0xc002f82420) Reply frame received for 5
I0410 13:46:09.172302 6 log.go:172] (0xc002f82420) Data frame received for 3
I0410 13:46:09.172336 6 log.go:172] (0xc001351900) (3) Data frame handling
I0410 13:46:09.172346 6 log.go:172] (0xc001351900) (3) Data frame sent
I0410 13:46:09.172351 6 log.go:172] (0xc002f82420) Data frame received for 3
I0410 13:46:09.172361 6 log.go:172] (0xc001351900) (3) Data frame handling
I0410 13:46:09.172378 6 log.go:172] (0xc002f82420) Data frame received for 5
I0410 13:46:09.172393 6 log.go:172] (0xc001351ae0) (5) Data frame handling
I0410 13:46:09.173957 6 log.go:172] (0xc002f82420) Data frame received for 1
I0410 13:46:09.173978 6 log.go:172] (0xc0017ca3c0) (1) Data frame handling
I0410 13:46:09.173996 6 log.go:172] (0xc0017ca3c0) (1) Data frame sent
I0410 13:46:09.174011 6 log.go:172] (0xc002f82420) (0xc0017ca3c0) Stream removed, broadcasting: 1
I0410 13:46:09.174070 6 log.go:172] (0xc002f82420) Go away received
I0410 13:46:09.174123 6 log.go:172] (0xc002f82420) (0xc0017ca3c0) Stream removed, broadcasting: 1
I0410 13:46:09.174149 6 log.go:172] (0xc002f82420) (0xc001351900) Stream removed, broadcasting: 3
I0410 13:46:09.174178 6 log.go:172] (0xc002f82420) (0xc001351ae0) Stream removed, broadcasting: 5
Apr 10 13:46:09.174: INFO: Exec stderr: ""
Apr 10 13:46:09.174: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:09.174: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:09.203901 6 log.go:172] (0xc0033fe630) (0xc001351d60) Create stream
I0410 13:46:09.203924 6 log.go:172] (0xc0033fe630) (0xc001351d60) Stream added, broadcasting: 1
I0410 13:46:09.206491 6 log.go:172] (0xc0033fe630) Reply frame received for 1
I0410 13:46:09.206532 6 log.go:172] (0xc0033fe630) (0xc0017ca500) Create stream
I0410 13:46:09.206548 6 log.go:172] (0xc0033fe630) (0xc0017ca500) Stream added, broadcasting: 3
I0410 13:46:09.207467 6 log.go:172] (0xc0033fe630) Reply frame received for 3
I0410 13:46:09.207500 6 log.go:172] (0xc0033fe630) (0xc0017ca5a0) Create stream
I0410 13:46:09.207513 6 log.go:172] (0xc0033fe630) (0xc0017ca5a0) Stream added, broadcasting: 5
I0410 13:46:09.208373 6 log.go:172] (0xc0033fe630) Reply frame received for 5
I0410 13:46:09.279500 6 log.go:172] (0xc0033fe630) Data frame received for 5
I0410 13:46:09.279543 6 log.go:172] (0xc0017ca5a0) (5) Data frame handling
I0410 13:46:09.279572 6 log.go:172] (0xc0033fe630) Data frame received for 3
I0410 13:46:09.279581 6 log.go:172] (0xc0017ca500) (3) Data frame handling
I0410 13:46:09.279593 6 log.go:172] (0xc0017ca500) (3) Data frame sent
I0410 13:46:09.279602 6 log.go:172] (0xc0033fe630) Data frame received for 3
I0410 13:46:09.279610 6 log.go:172] (0xc0017ca500) (3) Data frame handling
I0410 13:46:09.280502 6 log.go:172] (0xc0033fe630) Data frame received for 1
I0410 13:46:09.280515 6 log.go:172] (0xc001351d60) (1) Data frame handling
I0410 13:46:09.280523 6 log.go:172] (0xc001351d60) (1) Data frame sent
I0410 13:46:09.280535 6 log.go:172] (0xc0033fe630) (0xc001351d60) Stream removed, broadcasting: 1
I0410 13:46:09.280611 6 log.go:172] (0xc0033fe630) (0xc001351d60) Stream removed, broadcasting: 1
I0410 13:46:09.280626 6 log.go:172] (0xc0033fe630) (0xc0017ca500) Stream removed, broadcasting: 3
I0410 13:46:09.280681 6 log.go:172] (0xc0033fe630) Go away received
I0410 13:46:09.280803 6 log.go:172] (0xc0033fe630) (0xc0017ca5a0) Stream removed, broadcasting: 5
Apr 10 13:46:09.280: INFO: Exec stderr: ""
Apr 10 13:46:09.280: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2186 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 13:46:09.280: INFO: >>> kubeConfig: /root/.kube/config
I0410 13:46:09.324768 6 log.go:172] (0xc0033ff130) (0xc0011ae500) Create stream
I0410 13:46:09.324815 6 log.go:172] (0xc0033ff130) (0xc0011ae500) Stream added, broadcasting: 1
I0410 13:46:09.333603 6 log.go:172] (0xc0033ff130) Reply frame received for 1
I0410 13:46:09.333638 6 log.go:172] (0xc0033ff130) (0xc002bda000) Create stream
I0410 13:46:09.333648 6 log.go:172] (0xc0033ff130) (0xc002bda000) Stream added, broadcasting: 3
I0410 13:46:09.334374 6 log.go:172] (0xc0033ff130) Reply frame received for 3
I0410 13:46:09.334403 6 log.go:172] (0xc0033ff130) (0xc002bda0a0) Create stream
I0410 13:46:09.334411 6 log.go:172] (0xc0033ff130) (0xc002bda0a0) Stream added, broadcasting: 5
I0410 13:46:09.335014 6 log.go:172] (0xc0033ff130) Reply frame received for 5
I0410 13:46:09.400251 6 log.go:172] (0xc0033ff130) Data frame received for 3
I0410 13:46:09.400315 6 log.go:172] (0xc002bda000) (3) Data frame handling
I0410 13:46:09.400336 6 log.go:172] (0xc002bda000) (3) Data frame sent
I0410 13:46:09.400350 6 log.go:172] (0xc0033ff130) Data frame received for 3
I0410 13:46:09.400361 6 log.go:172] (0xc002bda000) (3) Data frame handling
I0410 13:46:09.400395 6 log.go:172] (0xc0033ff130) Data frame received for 5
I0410 13:46:09.400418 6 log.go:172] (0xc002bda0a0) (5) Data frame handling
I0410 13:46:09.402766 6 log.go:172] (0xc0033ff130) Data frame received for 1
I0410 13:46:09.402796 6 log.go:172] (0xc0011ae500) (1) Data frame handling
I0410 13:46:09.402808 6 log.go:172] (0xc0011ae500) (1) Data frame sent
I0410 13:46:09.402828 6 log.go:172] (0xc0033ff130) (0xc0011ae500) Stream removed, broadcasting: 1
I0410 13:46:09.402852 6 log.go:172] (0xc0033ff130) Go away received
I0410 13:46:09.402921 6 log.go:172] (0xc0033ff130) (0xc0011ae500) Stream removed, broadcasting: 1
I0410 13:46:09.402950 6 log.go:172] (0xc0033ff130) (0xc002bda000) Stream removed, broadcasting: 3
I0410 13:46:09.402969 6 log.go:172] (0xc0033ff130) (0xc002bda0a0) Stream removed, broadcasting: 5
Apr 10 13:46:09.402: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:46:09.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2186" for this suite.
Apr 10 13:46:55.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:46:55.546: INFO: namespace e2e-kubelet-etc-hosts-2186 deletion completed in 46.139771112s
• [SLOW TEST:57.347 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:46:55.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 10 13:46:55.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6064'
Apr 10 13:46:58.198: INFO: stderr: ""
Apr 10 13:46:58.198: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 10 13:46:59.202: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:46:59.202: INFO: Found 0 / 1
Apr 10 13:47:00.203: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:47:00.203: INFO: Found 0 / 1
Apr 10 13:47:01.203: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:47:01.203: INFO: Found 0 / 1
Apr 10 13:47:02.203: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:47:02.203: INFO: Found 1 / 1
Apr 10 13:47:02.203: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 10 13:47:02.206: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:47:02.206: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 10 13:47:02.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9j5lw --namespace=kubectl-6064 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 10 13:47:02.305: INFO: stderr: ""
Apr 10 13:47:02.305: INFO: stdout: "pod/redis-master-9j5lw patched\n"
STEP: checking annotations
Apr 10 13:47:02.332: INFO: Selector matched 1 pods for map[app:redis]
Apr 10 13:47:02.332: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:47:02.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6064" for this suite.
Apr 10 13:47:24.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:47:24.432: INFO: namespace kubectl-6064 deletion completed in 22.095775789s
• [SLOW TEST:28.884 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:47:24.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9514/secret-test-e49b2df9-3580-47f9-9486-c7d860447b4c
STEP: Creating a pod to test consume secrets
Apr 10 13:47:24.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635" in namespace "secrets-9514" to be "success or failure"
Apr 10 13:47:24.519: INFO: Pod "pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801479ms
Apr 10 13:47:26.523: INFO: Pod "pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007520202s
Apr 10 13:47:28.527: INFO: Pod "pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011163293s
STEP: Saw pod success
Apr 10 13:47:28.527: INFO: Pod "pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635" satisfied condition "success or failure"
Apr 10 13:47:28.529: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635 container env-test:
STEP: delete the pod
Apr 10 13:47:28.551: INFO: Waiting for pod pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635 to disappear
Apr 10 13:47:28.555: INFO: Pod pod-configmaps-a5a7ea9a-aa66-4d41-ac97-93b37097a635 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:47:28.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9514" for this suite.
Apr 10 13:47:34.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:47:34.652: INFO: namespace secrets-9514 deletion completed in 6.092581861s
• [SLOW TEST:10.220 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:47:34.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7166, will wait for the garbage collector to delete the pods
Apr 10 13:47:38.766: INFO: Deleting Job.batch foo took: 6.553862ms
Apr 10 13:47:39.066: INFO: Terminating Job.batch foo pods took: 300.280913ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:48:22.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7166" for this suite.
Apr 10 13:48:28.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:48:28.371: INFO: namespace job-7166 deletion completed in 6.097797501s
• [SLOW TEST:53.718 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:48:28.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0410 13:48:39.849781 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 10 13:48:39.849: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:48:39.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9190" for this suite.
Apr 10 13:48:47.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:48:47.931: INFO: namespace gc-9190 deletion completed in 8.077979146s

• [SLOW TEST:19.560 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:48:47.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 13:48:48.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca" in namespace "projected-5062" to be "success or failure"
Apr 10 13:48:48.010: INFO: Pod "downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349107ms
Apr 10 13:48:50.014: INFO: Pod "downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006998243s
Apr 10 13:48:52.018: INFO: Pod "downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011433613s
STEP: Saw pod success
Apr 10 13:48:52.018: INFO: Pod "downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca" satisfied condition "success or failure"
Apr 10 13:48:52.021: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca container client-container:
STEP: delete the pod
Apr 10 13:48:52.042: INFO: Waiting for pod downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca to disappear
Apr 10 13:48:52.046: INFO: Pod downwardapi-volume-5c8a76b1-9ea2-498d-964b-03bd6a2680ca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:48:52.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5062" for this suite.
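The pod this projected-downwardAPI spec creates can be approximated by a manifest like the one below (a sketch, not the exact e2e fixture; the image, name, and limit value are illustrative): a projected volume exposes the container's own CPU limit through resourceFieldRef, and the client container simply prints the file.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                    # example limit surfaced via the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m              # report the limit in millicores
```

With this divisor, /etc/podinfo/cpu_limit would contain the limit expressed in millicores; the e2e test asserts on exactly this kind of output in the container log.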
Apr 10 13:48:58.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:48:58.180: INFO: namespace projected-5062 deletion completed in 6.130224046s • [SLOW TEST:10.248 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:48:58.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:48:58.280: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 10 13:49:03.284: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 10 13:49:03.284: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 10 13:49:05.288: INFO: Creating deployment "test-rollover-deployment" Apr 10 13:49:05.298: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 10 13:49:07.306: INFO: Check revision of new replica set 
for deployment "test-rollover-deployment" Apr 10 13:49:07.313: INFO: Ensure that both replica sets have 1 created replica Apr 10 13:49:07.319: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 10 13:49:07.325: INFO: Updating deployment test-rollover-deployment Apr 10 13:49:07.325: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 10 13:49:09.355: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 10 13:49:09.362: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 10 13:49:09.368: INFO: all replica sets need to contain the pod-template-hash label Apr 10 13:49:09.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123347, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:11.378: INFO: all replica sets need to contain the pod-template-hash label Apr 10 13:49:11.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:13.377: INFO: all replica sets need to contain the pod-template-hash label Apr 10 13:49:13.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:15.376: INFO: all replica sets need to contain the pod-template-hash label Apr 10 13:49:15.376: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:17.377: INFO: all replica sets need to contain the pod-template-hash label Apr 10 13:49:17.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:19.375: INFO: all 
replica sets need to contain the pod-template-hash label Apr 10 13:49:19.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123350, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 13:49:21.376: INFO: Apr 10 13:49:21.376: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 10 13:49:21.384: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4588,SelfLink:/apis/apps/v1/namespaces/deployment-4588/deployments/test-rollover-deployment,UID:7b07ab63-a7af-4d14-89e8-761f1c96ac55,ResourceVersion:4671080,Generation:2,CreationTimestamp:2020-04-10 13:49:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-10 13:49:05 +0000 UTC 2020-04-10 13:49:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-10 13:49:20 +0000 UTC 2020-04-10 13:49:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 10 13:49:21.388: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4588,SelfLink:/apis/apps/v1/namespaces/deployment-4588/replicasets/test-rollover-deployment-854595fc44,UID:5ea9ecf8-94cd-4b93-906b-172d7ecc875f,ResourceVersion:4671069,Generation:2,CreationTimestamp:2020-04-10 13:49:07 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b07ab63-a7af-4d14-89e8-761f1c96ac55 0xc00274ee17 0xc00274ee18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 10 13:49:21.388: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 10 13:49:21.388: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4588,SelfLink:/apis/apps/v1/namespaces/deployment-4588/replicasets/test-rollover-controller,UID:84ea80f8-476a-40c9-9a33-084af99156b3,ResourceVersion:4671079,Generation:2,CreationTimestamp:2020-04-10 13:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b07ab63-a7af-4d14-89e8-761f1c96ac55 0xc00274ec67 0xc00274ec68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 10 13:49:21.388: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4588,SelfLink:/apis/apps/v1/namespaces/deployment-4588/replicasets/test-rollover-deployment-9b8b997cf,UID:98424bf1-5ece-4090-994a-6ea675edd796,ResourceVersion:4671033,Generation:2,CreationTimestamp:2020-04-10 13:49:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b07ab63-a7af-4d14-89e8-761f1c96ac55 0xc00274f060 0xc00274f061}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 10 13:49:21.391: INFO: Pod "test-rollover-deployment-854595fc44-49gwq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-49gwq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4588,SelfLink:/api/v1/namespaces/deployment-4588/pods/test-rollover-deployment-854595fc44-49gwq,UID:7aff3da5-fae5-4ce4-af58-21c2c37b6279,ResourceVersion:4671046,Generation:0,CreationTimestamp:2020-04-10 13:49:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 5ea9ecf8-94cd-4b93-906b-172d7ecc875f 0xc00093ad67 0xc00093ad68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4fmlc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4fmlc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4fmlc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00093ae80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00093aea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:49:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:49:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:49:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 13:49:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.156,StartTime:2020-04-10 13:49:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-10 13:49:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://7c3fb0f44b0d7932e7f25f5702857672b893ac68c6f3a0a45fcfafd7b8511a11}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:49:21.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4588" for this suite. Apr 10 13:49:27.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:49:27.469: INFO: namespace deployment-4588 deletion completed in 6.075121628s • [SLOW TEST:29.288 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:49:27.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
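A pod demonstrating the behaviour this readiness-probe spec checks might look like the sketch below (names and image are illustrative, not the test's fixture): the exec probe always exits non-zero, so the pod stays Running but never becomes Ready, and because readiness failures, unlike liveness failures, never trigger restarts, the restart count stays at zero.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: never-ready                 # illustrative name
spec:
  containers:
  - name: probe-test
    image: busybox                  # stand-in image
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails, so the pod is never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```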
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:50:27.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1854" for this suite.
Apr 10 13:50:49.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:50:49.629: INFO: namespace container-probe-1854 deletion completed in 22.094157707s

• [SLOW TEST:82.159 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:50:49.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 10 13:50:49.702: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Apr 10 13:50:50.261: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 10 13:50:52.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123450, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123450, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123450, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722123450, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 10 13:50:55.158: INFO: Waited 625.121919ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:50:55.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7364" for this suite.
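"Registering the sample API server" above boils down to creating an APIService object that points the kube-apiserver's aggregation layer at a Service in front of the sample-apiserver deployment. A sketch of such a registration (the group/version follows the upstream sample-apiserver; the Service name and TLS handling here are illustrative, not what this run used):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io       # must be <version>.<group>
spec:
  group: wardle.k8s.io               # sample-apiserver's API group
  version: v1alpha1
  service:
    name: sample-api                 # illustrative Service fronting the deployment
    namespace: aggregator-7364
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true        # acceptable in a test; production should set caBundle
```

Once this object exists and the backing pods are Ready, requests to /apis/wardle.k8s.io/v1alpha1 are proxied to the extension server, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" line is probing.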
Apr 10 13:51:01.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:51:01.860: INFO: namespace aggregator-7364 deletion completed in 6.260206014s • [SLOW TEST:12.230 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:51:01.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 10 13:51:01.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2400' Apr 10 13:51:02.014: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be 
removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 13:51:02.014: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 10 13:51:04.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2400' Apr 10 13:51:04.156: INFO: stderr: "" Apr 10 13:51:04.156: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:51:04.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2400" for this suite. Apr 10 13:52:26.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:52:26.252: INFO: namespace kubectl-2400 deletion completed in 1m22.091702299s • [SLOW TEST:84.391 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:52:26.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 10 13:52:26.339: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5550,SelfLink:/api/v1/namespaces/watch-5550/configmaps/e2e-watch-test-watch-closed,UID:e6bdb3f0-d090-4c7d-88e4-c7845ad624fb,ResourceVersion:4671623,Generation:0,CreationTimestamp:2020-04-10 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 13:52:26.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5550,SelfLink:/api/v1/namespaces/watch-5550/configmaps/e2e-watch-test-watch-closed,UID:e6bdb3f0-d090-4c7d-88e4-c7845ad624fb,ResourceVersion:4671624,Generation:0,CreationTimestamp:2020-04-10 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 10 13:52:26.351: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5550,SelfLink:/api/v1/namespaces/watch-5550/configmaps/e2e-watch-test-watch-closed,UID:e6bdb3f0-d090-4c7d-88e4-c7845ad624fb,ResourceVersion:4671625,Generation:0,CreationTimestamp:2020-04-10 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 13:52:26.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5550,SelfLink:/api/v1/namespaces/watch-5550/configmaps/e2e-watch-test-watch-closed,UID:e6bdb3f0-d090-4c7d-88e4-c7845ad624fb,ResourceVersion:4671626,Generation:0,CreationTimestamp:2020-04-10 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:52:26.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5550" for this suite. Apr 10 13:52:32.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:52:32.438: INFO: namespace watch-5550 deletion completed in 6.082309723s • [SLOW TEST:6.185 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:52:32.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 13:52:36.571: INFO: Waiting up to 5m0s for pod "client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c" in namespace "pods-5892" to be "success or failure" Apr 10 13:52:36.589: INFO: Pod "client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c": Phase="Pending", Reason="", 
readiness=false. Elapsed: 17.819498ms Apr 10 13:52:38.592: INFO: Pod "client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021287007s Apr 10 13:52:40.596: INFO: Pod "client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024799336s STEP: Saw pod success Apr 10 13:52:40.596: INFO: Pod "client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c" satisfied condition "success or failure" Apr 10 13:52:40.598: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c container env3cont: STEP: delete the pod Apr 10 13:52:40.614: INFO: Waiting for pod client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c to disappear Apr 10 13:52:40.619: INFO: Pod client-envvars-76996f17-9cb4-4b7a-bba3-af834355bc9c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:52:40.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5892" for this suite. 
Apr 10 13:53:20.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:53:20.720: INFO: namespace pods-5892 deletion completed in 40.097645044s • [SLOW TEST:48.282 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:53:20.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 13:53:20.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499" in namespace "projected-6012" to be "success or failure" Apr 10 13:53:20.811: INFO: Pod "downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.523348ms Apr 10 13:53:22.828: INFO: Pod "downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021014151s Apr 10 13:53:24.835: INFO: Pod "downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027431817s STEP: Saw pod success Apr 10 13:53:24.835: INFO: Pod "downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499" satisfied condition "success or failure" Apr 10 13:53:24.837: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499 container client-container: STEP: delete the pod Apr 10 13:53:24.868: INFO: Waiting for pod downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499 to disappear Apr 10 13:53:24.877: INFO: Pod downwardapi-volume-299899d9-ce7a-4d7f-a35e-00e8b128d499 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:53:24.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6012" for this suite. 
Apr 10 13:53:30.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:53:30.998: INFO: namespace projected-6012 deletion completed in 6.118229987s • [SLOW TEST:10.278 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:53:30.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3522 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3522 STEP: Creating statefulset with conflicting port in namespace statefulset-3522 STEP: Waiting until pod test-pod will start running 
in namespace statefulset-3522 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3522 Apr 10 13:53:35.099: INFO: Observed stateful pod in namespace: statefulset-3522, name: ss-0, uid: ba3a66cb-a957-4fff-8450-adcbdae8d1ac, status phase: Pending. Waiting for statefulset controller to delete. Apr 10 13:53:35.489: INFO: Observed stateful pod in namespace: statefulset-3522, name: ss-0, uid: ba3a66cb-a957-4fff-8450-adcbdae8d1ac, status phase: Failed. Waiting for statefulset controller to delete. Apr 10 13:53:35.500: INFO: Observed stateful pod in namespace: statefulset-3522, name: ss-0, uid: ba3a66cb-a957-4fff-8450-adcbdae8d1ac, status phase: Failed. Waiting for statefulset controller to delete. Apr 10 13:53:35.548: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3522 STEP: Removing pod with conflicting port in namespace statefulset-3522 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3522 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 10 13:53:39.615: INFO: Deleting all statefulset in ns statefulset-3522 Apr 10 13:53:39.620: INFO: Scaling statefulset ss to 0 Apr 10 13:53:59.634: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 13:53:59.638: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:53:59.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3522" for this suite. 
Apr 10 13:54:05.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:54:05.740: INFO: namespace statefulset-3522 deletion completed in 6.084561524s • [SLOW TEST:34.742 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:54:05.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 10 13:54:05.825: INFO: Waiting up to 5m0s for pod "client-containers-a64691a8-0b4d-4c53-b305-139b043ab308" in namespace "containers-5105" to be "success or failure" Apr 10 13:54:05.830: INFO: Pod "client-containers-a64691a8-0b4d-4c53-b305-139b043ab308": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.137841ms Apr 10 13:54:07.847: INFO: Pod "client-containers-a64691a8-0b4d-4c53-b305-139b043ab308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022039927s Apr 10 13:54:09.853: INFO: Pod "client-containers-a64691a8-0b4d-4c53-b305-139b043ab308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028297043s STEP: Saw pod success Apr 10 13:54:09.853: INFO: Pod "client-containers-a64691a8-0b4d-4c53-b305-139b043ab308" satisfied condition "success or failure" Apr 10 13:54:09.857: INFO: Trying to get logs from node iruya-worker2 pod client-containers-a64691a8-0b4d-4c53-b305-139b043ab308 container test-container: STEP: delete the pod Apr 10 13:54:10.030: INFO: Waiting for pod client-containers-a64691a8-0b4d-4c53-b305-139b043ab308 to disappear Apr 10 13:54:10.043: INFO: Pod client-containers-a64691a8-0b4d-4c53-b305-139b043ab308 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:54:10.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5105" for this suite. 
Apr 10 13:54:16.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:54:16.135: INFO: namespace containers-5105 deletion completed in 6.088414552s • [SLOW TEST:10.394 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:54:16.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-cccb2ffb-ccd3-4700-a2ef-f0d0f78c9d15 in namespace container-probe-9084 Apr 10 13:54:20.288: INFO: Started pod test-webserver-cccb2ffb-ccd3-4700-a2ef-f0d0f78c9d15 in namespace container-probe-9084 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 13:54:20.290: INFO: Initial restart count of pod test-webserver-cccb2ffb-ccd3-4700-a2ef-f0d0f78c9d15 is 0 
STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:58:20.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9084" for this suite. Apr 10 13:58:26.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:58:26.953: INFO: namespace container-probe-9084 deletion completed in 6.114564222s • [SLOW TEST:250.818 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:58:26.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 10 13:58:27.039: INFO: Waiting up to 5m0s for pod "client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9" in namespace "containers-9987" to be "success or 
failure" Apr 10 13:58:27.043: INFO: Pod "client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324761ms Apr 10 13:58:29.047: INFO: Pod "client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007392245s Apr 10 13:58:31.051: INFO: Pod "client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011964532s STEP: Saw pod success Apr 10 13:58:31.051: INFO: Pod "client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9" satisfied condition "success or failure" Apr 10 13:58:31.054: INFO: Trying to get logs from node iruya-worker2 pod client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9 container test-container: STEP: delete the pod Apr 10 13:58:31.080: INFO: Waiting for pod client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9 to disappear Apr 10 13:58:31.086: INFO: Pod client-containers-f020e6cd-18a6-47d5-ac12-d1abff47d0b9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:58:31.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9987" for this suite. 
Apr 10 13:58:37.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:58:37.205: INFO: namespace containers-9987 deletion completed in 6.116069349s • [SLOW TEST:10.251 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:58:37.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 10 13:58:37.282: INFO: Waiting up to 5m0s for pod "var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996" in namespace "var-expansion-8231" to be "success or failure" Apr 10 13:58:37.285: INFO: Pod "var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996": Phase="Pending", Reason="", readiness=false. Elapsed: 3.1893ms Apr 10 13:58:39.289: INFO: Pod "var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006685796s Apr 10 13:58:41.292: INFO: Pod "var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010555822s STEP: Saw pod success Apr 10 13:58:41.292: INFO: Pod "var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996" satisfied condition "success or failure" Apr 10 13:58:41.295: INFO: Trying to get logs from node iruya-worker pod var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996 container dapi-container: STEP: delete the pod Apr 10 13:58:41.324: INFO: Waiting for pod var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996 to disappear Apr 10 13:58:41.342: INFO: Pod var-expansion-ba415068-dc6e-47f2-be8e-af70a8710996 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:58:41.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8231" for this suite. Apr 10 13:58:47.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:58:47.451: INFO: namespace var-expansion-8231 deletion completed in 6.106231411s • [SLOW TEST:10.245 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 10 13:58:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:58:53.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1222" for this suite. Apr 10 13:58:59.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:58:59.784: INFO: namespace namespaces-1222 deletion completed in 6.096694562s STEP: Destroying namespace "nsdeletetest-9845" for this suite. Apr 10 13:58:59.786: INFO: Namespace nsdeletetest-9845 was already deleted STEP: Destroying namespace "nsdeletetest-2970" for this suite. 
Apr 10 13:59:05.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 13:59:05.882: INFO: namespace nsdeletetest-2970 deletion completed in 6.09608593s • [SLOW TEST:18.431 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 13:59:05.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 10 13:59:05.935: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 13:59:13.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9504" for this suite. 
Apr 10 13:59:35.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:59:35.264: INFO: namespace init-container-9504 deletion completed in 22.100784961s
• [SLOW TEST:29.382 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:59:35.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-506/configmap-test-4cc6cbde-dee7-47af-b98b-6dad6a4c3f2c
STEP: Creating a pod to test consume configMaps
Apr 10 13:59:35.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e" in namespace "configmap-506" to be "success or failure"
Apr 10 13:59:35.331: INFO: Pod "pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.418979ms
Apr 10 13:59:37.336: INFO: Pod "pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008072924s
Apr 10 13:59:39.340: INFO: Pod "pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012250391s
STEP: Saw pod success
Apr 10 13:59:39.340: INFO: Pod "pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e" satisfied condition "success or failure"
Apr 10 13:59:39.344: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e container env-test:
STEP: delete the pod
Apr 10 13:59:39.375: INFO: Waiting for pod pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e to disappear
Apr 10 13:59:39.396: INFO: Pod pod-configmaps-a343b54e-82c0-46bb-bf39-59ac2af34d4e no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:59:39.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-506" for this suite.
Apr 10 13:59:45.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:59:45.489: INFO: namespace configmap-506 deletion completed in 6.089646985s
• [SLOW TEST:10.225 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:59:45.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-098c7fe1-045a-4ef9-981c-9a2cf58dd50f
STEP: Creating a pod to test consume secrets
Apr 10 13:59:45.623: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b" in namespace "projected-8196" to be "success or failure"
Apr 10 13:59:45.631: INFO: Pod "pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.710407ms
Apr 10 13:59:47.634: INFO: Pod "pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01122679s
Apr 10 13:59:49.639: INFO: Pod "pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015659448s
STEP: Saw pod success
Apr 10 13:59:49.639: INFO: Pod "pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b" satisfied condition "success or failure"
Apr 10 13:59:49.642: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b container projected-secret-volume-test:
STEP: delete the pod
Apr 10 13:59:49.663: INFO: Waiting for pod pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b to disappear
Apr 10 13:59:49.690: INFO: Pod pod-projected-secrets-825ccb11-df4a-408b-9316-77561f86ea5b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 13:59:49.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8196" for this suite.
Apr 10 13:59:55.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 13:59:55.787: INFO: namespace projected-8196 deletion completed in 6.09354727s
• [SLOW TEST:10.298 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 13:59:55.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-a37d35fc-9900-4bb6-a4e1-4f5d2f6549e2 in namespace container-probe-1158
Apr 10 13:59:59.886: INFO: Started pod liveness-a37d35fc-9900-4bb6-a4e1-4f5d2f6549e2 in namespace container-probe-1158
STEP: checking the pod's current state and verifying that restartCount is present
Apr 10 13:59:59.889: INFO: Initial restart count of pod liveness-a37d35fc-9900-4bb6-a4e1-4f5d2f6549e2 is 0
Apr 10 14:00:21.948: INFO: Restart count of pod container-probe-1158/liveness-a37d35fc-9900-4bb6-a4e1-4f5d2f6549e2 is now 1 (22.058752065s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:00:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1158" for this suite.
Apr 10 14:00:28.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:00:28.093: INFO: namespace container-probe-1158 deletion completed in 6.114968098s
• [SLOW TEST:32.305 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:00:28.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:00:28.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5008" for this suite.
Apr 10 14:00:50.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:00:50.365: INFO: namespace pods-5008 deletion completed in 22.141961831s
• [SLOW TEST:22.272 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:00:50.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 10 14:00:50.436: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7716" to be "success or failure"
Apr 10 14:00:50.440: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34699ms
Apr 10 14:00:52.443: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00771657s
Apr 10 14:00:54.448: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012252212s
STEP: Saw pod success
Apr 10 14:00:54.448: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 10 14:00:54.451: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 10 14:00:54.471: INFO: Waiting for pod pod-host-path-test to disappear
Apr 10 14:00:54.476: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:00:54.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7716" for this suite.
Apr 10 14:01:00.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:01:00.713: INFO: namespace hostpath-7716 deletion completed in 6.234267435s
• [SLOW TEST:10.347 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:01:00.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-117fe282-549c-4eef-870d-518450c54ab6
STEP: Creating a pod to test consume secrets
Apr 10 14:01:00.778: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c" in namespace "projected-8933" to be "success or failure"
Apr 10 14:01:00.782: INFO: Pod "pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.001849ms
Apr 10 14:01:02.786: INFO: Pod "pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008468297s
Apr 10 14:01:04.791: INFO: Pod "pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01277614s
STEP: Saw pod success
Apr 10 14:01:04.791: INFO: Pod "pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c" satisfied condition "success or failure"
Apr 10 14:01:04.794: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c container projected-secret-volume-test:
STEP: delete the pod
Apr 10 14:01:04.829: INFO: Waiting for pod pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c to disappear
Apr 10 14:01:04.854: INFO: Pod pod-projected-secrets-73c04124-9f50-4f8d-9663-a79be357c47c no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:01:04.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8933" for this suite.
Apr 10 14:01:10.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:01:10.965: INFO: namespace projected-8933 deletion completed in 6.107700583s
• [SLOW TEST:10.252 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:01:10.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 14:01:11.040: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 10 14:01:16.045: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 10 14:01:16.045: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 10 14:01:16.089: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6718,SelfLink:/apis/apps/v1/namespaces/deployment-6718/deployments/test-cleanup-deployment,UID:70f50246-b338-416d-ba01-c8ab542b4f53,ResourceVersion:4673207,Generation:1,CreationTimestamp:2020-04-10 14:01:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Apr 10 14:01:16.101: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6718,SelfLink:/apis/apps/v1/namespaces/deployment-6718/replicasets/test-cleanup-deployment-55bbcbc84c,UID:cc30d34c-cfc4-4c45-9381-02182d82140d,ResourceVersion:4673209,Generation:1,CreationTimestamp:2020-04-10 14:01:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment
70f50246-b338-416d-ba01-c8ab542b4f53 0xc001fbcc17 0xc001fbcc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 10 14:01:16.101: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 10 14:01:16.101: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-6718,SelfLink:/apis/apps/v1/namespaces/deployment-6718/replicasets/test-cleanup-controller,UID:99eeb3f9-9345-45e0-9621-e189417f0161,ResourceVersion:4673208,Generation:1,CreationTimestamp:2020-04-10 14:01:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 70f50246-b338-416d-ba01-c8ab542b4f53 0xc001fbcb47 0xc001fbcb48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 10 14:01:16.153: INFO: Pod "test-cleanup-controller-56dpb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-56dpb,GenerateName:test-cleanup-controller-,Namespace:deployment-6718,SelfLink:/api/v1/namespaces/deployment-6718/pods/test-cleanup-controller-56dpb,UID:e7ea769d-c89c-4a93-abb2-66afb242d085,ResourceVersion:4673202,Generation:0,CreationTimestamp:2020-04-10 14:01:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 99eeb3f9-9345-45e0-9621-e189417f0161 0xc002bf2ea7 0xc002bf2ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8c8nc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8c8nc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8c8nc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bf2f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bf2f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:01:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:01:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:01:14 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:01:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.165,StartTime:2020-04-10 14:01:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-10 14:01:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://192a07bbf038f7d7c6810522c7be560393960c2377095f99fe373d2bc9dc9500}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 10 14:01:16.154: INFO: Pod "test-cleanup-deployment-55bbcbc84c-kq48x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-kq48x,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6718,SelfLink:/api/v1/namespaces/deployment-6718/pods/test-cleanup-deployment-55bbcbc84c-kq48x,UID:3ad3a731-e32a-471d-8d9c-123f8a1a2977,ResourceVersion:4673215,Generation:0,CreationTimestamp:2020-04-10 14:01:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c cc30d34c-cfc4-4c45-9381-02182d82140d 0xc002bf3027 0xc002bf3028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8c8nc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8c8nc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8c8nc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bf30c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bf30e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:01:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:01:16.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6718" for this suite. 
Apr 10 14:01:22.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:01:22.314: INFO: namespace deployment-6718 deletion completed in 6.095465278s • [SLOW TEST:11.349 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:01:22.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:01:22.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96" in namespace "downward-api-3181" to be "success or failure" Apr 10 14:01:22.449: INFO: Pod "downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96": Phase="Pending", Reason="", readiness=false. Elapsed: 35.031981ms Apr 10 14:01:24.453: INFO: Pod "downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038949891s Apr 10 14:01:26.457: INFO: Pod "downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04314841s STEP: Saw pod success Apr 10 14:01:26.457: INFO: Pod "downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96" satisfied condition "success or failure" Apr 10 14:01:26.460: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96 container client-container: STEP: delete the pod Apr 10 14:01:26.478: INFO: Waiting for pod downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96 to disappear Apr 10 14:01:26.496: INFO: Pod downwardapi-volume-169f7d47-a889-4da6-a8fa-02bf02899f96 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:01:26.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3181" for this suite. 
Apr 10 14:01:32.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:01:32.655: INFO: namespace downward-api-3181 deletion completed in 6.154697644s • [SLOW TEST:10.341 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:01:32.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:01:32.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9" in namespace "projected-256" to be "success or failure" Apr 10 14:01:32.734: INFO: Pod "downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.721401ms Apr 10 14:01:34.741: INFO: Pod "downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012494815s Apr 10 14:01:36.750: INFO: Pod "downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021745659s STEP: Saw pod success Apr 10 14:01:36.750: INFO: Pod "downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9" satisfied condition "success or failure" Apr 10 14:01:36.753: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9 container client-container: STEP: delete the pod Apr 10 14:01:36.780: INFO: Waiting for pod downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9 to disappear Apr 10 14:01:36.841: INFO: Pod downwardapi-volume-5469b858-6354-4eca-948b-a1cbdf3632b9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:01:36.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-256" for this suite. 
Apr 10 14:01:42.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:01:42.941: INFO: namespace projected-256 deletion completed in 6.096817189s • [SLOW TEST:10.286 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:01:42.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 10 14:01:49.900: INFO: 6 pods remaining Apr 10 14:01:49.900: INFO: 0 pods has nil DeletionTimestamp Apr 10 14:01:49.900: INFO: Apr 10 14:01:50.614: INFO: 0 pods remaining Apr 10 14:01:50.614: INFO: 0 pods has nil DeletionTimestamp Apr 10 14:01:50.614: INFO: Apr 10 14:01:51.417: INFO: 0 pods remaining Apr 10 14:01:51.417: INFO: 0 pods has nil DeletionTimestamp Apr 10 14:01:51.417: INFO: STEP: Gathering metrics W0410 14:01:52.132920 6 metrics_grabber.go:79] Master node is 
not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 10 14:01:52.132: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:01:52.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8765" for this suite. 
Apr 10 14:01:58.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:01:58.648: INFO: namespace gc-8765 deletion completed in 6.427703805s • [SLOW TEST:15.706 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:01:58.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 10 14:01:58.897: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:02:12.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "pods-599" for this suite. Apr 10 14:02:18.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:02:18.283: INFO: namespace pods-599 deletion completed in 6.093475022s • [SLOW TEST:19.635 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:02:18.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e54cb44c-3766-4acf-ad39-4fc410a4b04f STEP: Creating a pod to test consume configMaps Apr 10 14:02:18.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485" in namespace "configmap-9663" to be "success or failure" Apr 10 14:02:18.352: INFO: Pod "pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.21362ms Apr 10 14:02:20.356: INFO: Pod "pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008043199s Apr 10 14:02:22.410: INFO: Pod "pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062234888s STEP: Saw pod success Apr 10 14:02:22.410: INFO: Pod "pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485" satisfied condition "success or failure" Apr 10 14:02:22.413: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485 container configmap-volume-test: STEP: delete the pod Apr 10 14:02:22.482: INFO: Waiting for pod pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485 to disappear Apr 10 14:02:22.485: INFO: Pod pod-configmaps-a89a02bf-6950-4964-a00d-151ddfae2485 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:02:22.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9663" for this suite. 
Apr 10 14:02:28.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:02:28.600: INFO: namespace configmap-9663 deletion completed in 6.111903509s • [SLOW TEST:10.318 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:02:28.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 10 14:02:33.215: INFO: Successfully updated pod "annotationupdateabccb574-2c77-4df4-a875-70a8b50c481a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:02:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4549" for 
this suite. Apr 10 14:02:57.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:02:57.332: INFO: namespace projected-4549 deletion completed in 22.094994477s • [SLOW TEST:28.731 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:02:57.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 10 14:02:57.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 10 14:02:57.544: INFO: stderr: "" Apr 10 14:02:57.544: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:02:57.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5766" for this suite. 
Apr 10 14:03:03.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:03:03.642: INFO: namespace kubectl-5766 deletion completed in 6.094156104s • [SLOW TEST:6.309 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:03:03.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 10 14:03:03.719: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:03:03.797: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "kubectl-3443" for this suite. Apr 10 14:03:09.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:03:09.886: INFO: namespace kubectl-3443 deletion completed in 6.084732419s • [SLOW TEST:6.244 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:03:09.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 14:03:31.977: INFO: Container started at 2020-04-10 14:03:12 +0000 UTC, pod became ready at 2020-04-10 14:03:31 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
Apr 10 14:03:31.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7211" for this suite. Apr 10 14:03:53.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:03:54.081: INFO: namespace container-probe-7211 deletion completed in 22.099469305s • [SLOW TEST:44.194 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:03:54.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0f368e69-7a1d-48c7-a174-1494f17265e7 STEP: Creating a pod to test consume secrets Apr 10 14:03:54.160: INFO: Waiting up to 5m0s for pod "pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf" in namespace "secrets-7939" to be "success or failure" Apr 10 14:03:54.170: INFO: Pod "pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.984996ms Apr 10 14:03:56.174: INFO: Pod "pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014539274s Apr 10 14:03:58.178: INFO: Pod "pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018702872s STEP: Saw pod success Apr 10 14:03:58.178: INFO: Pod "pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf" satisfied condition "success or failure" Apr 10 14:03:58.181: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf container secret-volume-test: STEP: delete the pod Apr 10 14:03:58.207: INFO: Waiting for pod pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf to disappear Apr 10 14:03:58.256: INFO: Pod pod-secrets-c51bbea8-117c-44ee-bbe1-839a130549cf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:03:58.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7939" for this suite. 
Apr 10 14:04:04.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:04:04.360: INFO: namespace secrets-7939 deletion completed in 6.100390845s • [SLOW TEST:10.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:04:04.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:04:04.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6" in namespace "downward-api-4248" to be "success or failure" Apr 10 14:04:04.431: INFO: Pod "downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.831752ms Apr 10 14:04:06.435: INFO: Pod "downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009543969s Apr 10 14:04:08.439: INFO: Pod "downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014023101s STEP: Saw pod success Apr 10 14:04:08.440: INFO: Pod "downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6" satisfied condition "success or failure" Apr 10 14:04:08.443: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6 container client-container: STEP: delete the pod Apr 10 14:04:08.463: INFO: Waiting for pod downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6 to disappear Apr 10 14:04:08.467: INFO: Pod downwardapi-volume-306edc63-c1b4-479d-b6b1-0a05a8989ad6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:04:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4248" for this suite. 
Apr 10 14:04:14.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:04:14.559: INFO: namespace downward-api-4248 deletion completed in 6.088855621s • [SLOW TEST:10.199 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:04:14.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:04:14.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6" in namespace "downward-api-2700" to be "success or failure" Apr 10 14:04:14.667: INFO: Pod "downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.68642ms Apr 10 14:04:16.671: INFO: Pod "downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021953956s Apr 10 14:04:18.687: INFO: Pod "downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0379537s STEP: Saw pod success Apr 10 14:04:18.687: INFO: Pod "downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6" satisfied condition "success or failure" Apr 10 14:04:18.690: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6 container client-container: STEP: delete the pod Apr 10 14:04:18.710: INFO: Waiting for pod downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6 to disappear Apr 10 14:04:18.715: INFO: Pod downwardapi-volume-33290b06-68ac-4fb1-87a7-4add185be4a6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:04:18.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2700" for this suite. 
Apr 10 14:04:24.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:04:24.815: INFO: namespace downward-api-2700 deletion completed in 6.096577452s
• [SLOW TEST:10.255 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:04:24.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 10 14:04:32.976: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:32.991: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:34.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:34.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:36.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:36.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:38.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:38.999: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:40.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:40.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:42.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:42.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:44.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:44.994: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:46.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:46.994: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:48.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:48.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:50.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:51.011: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:52.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:52.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:54.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:54.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:56.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:56.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:04:58.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:04:58.995: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:05:00.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:05:01.006: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 10 14:05:02.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 10 14:05:02.995: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:05:02.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2448" for this suite.
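The pod exercised above carries a postStart exec hook. A minimal sketch of such a pod, with an assumed image and hook command (the conformance suite's actual hook posts back to the helper handler pod created in its BeforeEach), looks like:

```yaml
# Sketch of a postStart exec hook pod; image and commands are
# illustrative assumptions, not what the suite runs.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart-marker"]
```

The kubelet runs the postStart handler immediately after the container is created, so the "check poststart hook" step can only verify the hook's side effect, not its ordering relative to the entrypoint's first instructions.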
Apr 10 14:05:25.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:05:25.118: INFO: namespace container-lifecycle-hook-2448 deletion completed in 22.118501062s
• [SLOW TEST:60.303 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:05:25.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e68f89f7-07e6-47c0-ba75-32ec24635583
STEP: Creating a pod to test consume configMaps
Apr 10 14:05:25.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c" in namespace "configmap-354" to be "success or failure"
Apr 10 14:05:25.273: INFO: Pod "pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.387782ms
Apr 10 14:05:27.303: INFO: Pod "pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042756596s
Apr 10 14:05:29.307: INFO: Pod "pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046767386s
STEP: Saw pod success
Apr 10 14:05:29.307: INFO: Pod "pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c" satisfied condition "success or failure"
Apr 10 14:05:29.311: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c container configmap-volume-test:
STEP: delete the pod
Apr 10 14:05:29.344: INFO: Waiting for pod pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c to disappear
Apr 10 14:05:29.356: INFO: Pod pod-configmaps-1d3ac41b-b7a4-48a9-8aa0-84fb34cea63c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:05:29.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-354" for this suite.
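This test consumes a ConfigMap through a volume with key-to-path mappings while the container runs as a non-root user. A hedged sketch under assumed names, UID, and data (all placeholders) is:

```yaml
# Illustrative mapped ConfigMap volume read by a non-root container.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root UID, an assumption for illustration
  containers:
  - name: configmap-volume-test
    image: busybox
    # the key "data-1" is remapped to the path "mapped/data-1"
    command: ["sh", "-c", "cat /etc/configmap-volume/mapped/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: mapped/data-1
```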
Apr 10 14:05:35.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:05:35.452: INFO: namespace configmap-354 deletion completed in 6.092978811s
• [SLOW TEST:10.334 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:05:35.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 in namespace container-probe-2756
Apr 10 14:05:39.553: INFO: Started pod liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 in namespace container-probe-2756
STEP: checking the pod's current state and verifying that restartCount is present
Apr 10 14:05:39.556: INFO: Initial restart count of pod liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is 0
Apr 10 14:05:57.596: INFO: Restart count of pod container-probe-2756/liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is now 1 (18.039566672s elapsed)
Apr 10 14:06:17.651: INFO: Restart count of pod container-probe-2756/liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is now 2 (38.094787796s elapsed)
Apr 10 14:06:37.730: INFO: Restart count of pod container-probe-2756/liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is now 3 (58.17369343s elapsed)
Apr 10 14:06:57.772: INFO: Restart count of pod container-probe-2756/liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is now 4 (1m18.215619505s elapsed)
Apr 10 14:07:59.994: INFO: Restart count of pod container-probe-2756/liveness-c40eb1e1-6065-4278-999d-3d35b47ecdd1 is now 5 (2m20.437597276s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:08:00.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2756" for this suite.
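The restart counts above climb because the pod's liveness probe keeps failing, and the kubelet restarts the container each time (with an increasing backoff, visible in the lengthening gaps between restarts). A minimal sketch of a pod that behaves this way, with assumed image, commands, and probe timings, is:

```yaml
# Illustrative always-eventually-failing liveness probe; all values
# are assumptions, not the conformance suite's exact spec.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: busybox
    # create the health file, then remove it so subsequent probes fail
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

With the default restartPolicy of Always, each probe failure kills and restarts the container, so status.containerStatuses[0].restartCount increases monotonically, which is exactly what the test asserts.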
Apr 10 14:08:06.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:08:06.159: INFO: namespace container-probe-2756 deletion completed in 6.146209654s
• [SLOW TEST:150.706 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:08:06.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-22831bb8-1819-4686-90f0-62a2aac551a2
STEP: Creating a pod to test consume secrets
Apr 10 14:08:06.235: INFO: Waiting up to 5m0s for pod "pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002" in namespace "secrets-1637" to be "success or failure"
Apr 10 14:08:06.253: INFO: Pod "pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002": Phase="Pending", Reason="", readiness=false. Elapsed: 18.119765ms
Apr 10 14:08:08.257: INFO: Pod "pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022311224s
Apr 10 14:08:10.262: INFO: Pod "pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027540642s
STEP: Saw pod success
Apr 10 14:08:10.262: INFO: Pod "pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002" satisfied condition "success or failure"
Apr 10 14:08:10.265: INFO: Trying to get logs from node iruya-worker pod pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002 container secret-volume-test:
STEP: delete the pod
Apr 10 14:08:10.280: INFO: Waiting for pod pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002 to disappear
Apr 10 14:08:10.285: INFO: Pod pod-secrets-dbcee793-6664-4eab-8ba3-aecaabf80002 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:08:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1637" for this suite.
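Here the secret volume is mounted with a restrictive defaultMode while the pod runs as a non-root user belonging to an fsGroup, which controls the group ownership of the projected files. A sketch under assumed UID/GID, mode, and data:

```yaml
# Illustrative secret volume with defaultMode and fsGroup; UID, GID,
# mode, and data are placeholder assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==   # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000        # projected files become group-owned by GID 2000
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400  # read-only for the owner
```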
Apr 10 14:08:16.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:08:16.382: INFO: namespace secrets-1637 deletion completed in 6.095011408s
• [SLOW TEST:10.223 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:08:16.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0410 14:08:46.967813       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 10 14:08:46.967: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:08:46.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-598" for this suite.
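The garbage-collector test deletes the Deployment with deleteOptions.propagationPolicy set to Orphan, so the ReplicaSet it created must survive the delete. A sketch of the equivalent DeleteOptions request body (the deployment name is an illustrative assumption):

```yaml
# DeleteOptions body sent with the DELETE request for the deployment;
# with propagationPolicy Orphan, dependents keep existing but lose
# their ownerReferences.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

From the command line the same behavior is `kubectl delete deployment my-deploy --cascade=orphan` on recent kubectl releases (older releases spelled it `--cascade=false`); the test then waits 30 seconds to confirm the garbage collector does not mistakenly remove the orphaned ReplicaSet.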
Apr 10 14:08:52.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:08:53.064: INFO: namespace gc-598 deletion completed in 6.093201719s
• [SLOW TEST:36.682 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:08:53.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 10 14:08:53.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2366'
Apr 10 14:08:55.337: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 10 14:08:55.337: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 10 14:08:55.353: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qz5p5]
Apr 10 14:08:55.354: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qz5p5" in namespace "kubectl-2366" to be "running and ready"
Apr 10 14:08:55.364: INFO: Pod "e2e-test-nginx-rc-qz5p5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618328ms
Apr 10 14:08:57.368: INFO: Pod "e2e-test-nginx-rc-qz5p5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014149357s
Apr 10 14:08:59.371: INFO: Pod "e2e-test-nginx-rc-qz5p5": Phase="Running", Reason="", readiness=true. Elapsed: 4.017327219s
Apr 10 14:08:59.371: INFO: Pod "e2e-test-nginx-rc-qz5p5" satisfied condition "running and ready"
Apr 10 14:08:59.371: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qz5p5]
Apr 10 14:08:59.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2366'
Apr 10 14:08:59.499: INFO: stderr: ""
Apr 10 14:08:59.499: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 10 14:08:59.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2366'
Apr 10 14:08:59.598: INFO: stderr: ""
Apr 10 14:08:59.598: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:08:59.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2366" for this suite.
Apr 10 14:09:05.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:09:05.734: INFO: namespace kubectl-2366 deletion completed in 6.116566405s
• [SLOW TEST:12.670 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:09:05.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 10 14:09:05.828: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674810,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 10 14:09:05.828: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674811,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 10 14:09:05.828: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674812,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 10 14:09:15.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674834,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 10 14:09:15.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674835,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 10 14:09:15.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-472,SelfLink:/api/v1/namespaces/watch-472/configmaps/e2e-watch-test-label-changed,UID:98682508-3d4c-4220-a9a3-efcf40304b25,ResourceVersion:4674836,Generation:0,CreationTimestamp:2020-04-10 14:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:09:15.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-472" for this suite.
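The object whose events are dumped above can be reconstructed from the log; the watch is established with the label selector watch-this-configmap=label-changed-and-restored, and the ConfigMap it tracks looks roughly like this (mutation counts the test's edits):

```yaml
# Reconstructed from the logged ConfigMap dumps; UID, resourceVersion,
# and other server-set fields are omitted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-472
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```

Because a watch with a label selector only sees matching objects, changing the label so the object stops matching surfaces as a DELETED event, and restoring it surfaces as ADDED, which is exactly the sequence logged above even though the underlying object was only modified.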
Apr 10 14:09:21.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:09:21.982: INFO: namespace watch-472 deletion completed in 6.104955553s
• [SLOW TEST:16.248 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:09:21.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4b232933-dbc9-45b1-8e18-b80b2dc91dc7
STEP: Creating a pod to test consume configMaps
Apr 10 14:09:22.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791" in namespace "projected-2761" to be "success or failure"
Apr 10 14:09:22.080: INFO: Pod "pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791": Phase="Pending", Reason="", readiness=false. Elapsed: 3.715009ms
Apr 10 14:09:24.111: INFO: Pod "pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034736836s
Apr 10 14:09:26.115: INFO: Pod "pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039227788s
STEP: Saw pod success
Apr 10 14:09:26.115: INFO: Pod "pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791" satisfied condition "success or failure"
Apr 10 14:09:26.118: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791 container projected-configmap-volume-test:
STEP: delete the pod
Apr 10 14:09:26.156: INFO: Waiting for pod pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791 to disappear
Apr 10 14:09:26.164: INFO: Pod pod-projected-configmaps-614d4d12-68f0-4420-ae7e-c7aabd5a7791 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:09:26.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2761" for this suite.
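This test projects the same ConfigMap into two separate volumes of one pod and reads it through both mounts. A sketch with assumed names, image, and paths:

```yaml
# Illustrative pod consuming one projected configMap via two volumes;
# names, image, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-1
    - name: projected-volume-2
      mountPath: /etc/projected-2
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
  - name: projected-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```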
Apr 10 14:09:32.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:09:32.279: INFO: namespace projected-2761 deletion completed in 6.112614258s
• [SLOW TEST:10.296 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:09:32.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-55ec30d1-c3e6-4bc4-9949-437137dc7731
STEP: Creating a pod to test consume configMaps
Apr 10 14:09:32.376: INFO: Waiting up to 5m0s for pod "pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c" in namespace "configmap-8315" to be "success or failure"
Apr 10 14:09:32.391: INFO: Pod "pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.562745ms
Apr 10 14:09:34.394: INFO: Pod "pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017668904s
Apr 10 14:09:36.415: INFO: Pod "pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03895605s
STEP: Saw pod success
Apr 10 14:09:36.415: INFO: Pod "pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c" satisfied condition "success or failure"
Apr 10 14:09:36.419: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c container configmap-volume-test:
STEP: delete the pod
Apr 10 14:09:36.438: INFO: Waiting for pod pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c to disappear
Apr 10 14:09:36.442: INFO: Pod pod-configmaps-4eecf326-acf0-4558-b84f-5797c4f9fc2c no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:09:36.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8315" for this suite.
Apr 10 14:09:42.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:09:42.550: INFO: namespace configmap-8315 deletion completed in 6.105490142s
• [SLOW TEST:10.271 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:09:42.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 10 14:09:43.118: INFO: Pod name wrapped-volume-race-70fb7b7d-5315-40d0-8ada-c14a98a44eea: Found 0 pods out of 5
Apr 10 14:09:48.126: INFO: Pod name wrapped-volume-race-70fb7b7d-5315-40d0-8ada-c14a98a44eea: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-70fb7b7d-5315-40d0-8ada-c14a98a44eea in namespace emptydir-wrapper-2956, will wait for the garbage collector to delete the pods
Apr 10 14:10:02.211: INFO: Deleting ReplicationController wrapped-volume-race-70fb7b7d-5315-40d0-8ada-c14a98a44eea took: 7.679502ms
Apr 10 14:10:02.511: INFO: Terminating ReplicationController wrapped-volume-race-70fb7b7d-5315-40d0-8ada-c14a98a44eea pods took: 300.255793ms
STEP: Creating RC which spawns configmap-volume pods
Apr 10 14:10:43.240: INFO: Pod name wrapped-volume-race-800a33c5-930b-4a7a-959e-51340efb6dd3: Found 0 pods out of 5
Apr 10 14:10:48.248: INFO: Pod name wrapped-volume-race-800a33c5-930b-4a7a-959e-51340efb6dd3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-800a33c5-930b-4a7a-959e-51340efb6dd3 in namespace emptydir-wrapper-2956, will wait for the garbage collector to delete the pods
Apr 10 14:11:02.335: INFO: Deleting ReplicationController wrapped-volume-race-800a33c5-930b-4a7a-959e-51340efb6dd3 took: 8.392877ms
Apr 10 14:11:02.635: INFO: Terminating ReplicationController wrapped-volume-race-800a33c5-930b-4a7a-959e-51340efb6dd3 pods took: 300.289831ms
STEP: Creating RC which spawns configmap-volume pods
Apr 10 14:11:43.291: INFO: Pod name wrapped-volume-race-e7fc9b26-917d-4d79-ab01-66d42b4453a2: Found 0 pods out of 5
Apr 10 14:11:48.298: INFO: Pod name wrapped-volume-race-e7fc9b26-917d-4d79-ab01-66d42b4453a2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e7fc9b26-917d-4d79-ab01-66d42b4453a2 in namespace emptydir-wrapper-2956, will wait for the garbage collector to delete the pods
Apr 10 14:12:02.406: INFO: Deleting ReplicationController wrapped-volume-race-e7fc9b26-917d-4d79-ab01-66d42b4453a2 took: 6.812757ms
Apr 10 14:12:02.707: INFO: Terminating ReplicationController wrapped-volume-race-e7fc9b26-917d-4d79-ab01-66d42b4453a2 pods took: 300.262995ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:12:43.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2956" for this suite.
Apr 10 14:12:51.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:12:51.812: INFO: namespace emptydir-wrapper-2956 deletion completed in 8.088486994s
• [SLOW TEST:189.262 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:12:51.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4394
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 10 14:12:51.909: INFO: Found 0 stateful pods, waiting for 3
Apr 10 14:13:01.923: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:01.923: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:01.923: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Apr 10 14:13:11.914: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:11.914: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:11.914: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 10 14:13:11.941: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 10 14:13:22.002: INFO: Updating stateful set ss2
Apr 10 14:13:22.010: INFO: Waiting for Pod statefulset-4394/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 10 14:13:32.156: INFO: Found 2 stateful pods, waiting for 3
Apr 10 14:13:42.161: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:42.161: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 10 14:13:42.161: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 10 14:13:42.184: INFO: Updating stateful set ss2
Apr 10 14:13:42.198: INFO: Waiting for Pod statefulset-4394/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 10 14:13:52.270: INFO: Updating stateful set ss2
Apr 10 14:13:52.313: INFO: Waiting for StatefulSet statefulset-4394/ss2 to complete update
Apr 10 14:13:52.313: INFO: Waiting for Pod statefulset-4394/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 10 14:14:02.322: INFO: Deleting all statefulset in ns statefulset-4394
Apr 10 14:14:02.325: INFO: Scaling statefulset ss2 to 0
Apr 10 14:14:32.340: INFO: Waiting for statefulset status.replicas updated to 0
Apr 10 14:14:32.344: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:14:32.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4394" for this suite.
Apr 10 14:14:38.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:14:38.447: INFO: namespace statefulset-4394 deletion completed in 6.086746641s
• [SLOW TEST:106.635 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:14:38.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 10 14:14:38.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5013'
Apr 10 14:14:38.764: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 10 14:14:38.764: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 10 14:14:38.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5013'
Apr 10 14:14:38.911: INFO: stderr: ""
Apr 10 14:14:38.911: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:14:38.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5013" for this suite.
Apr 10 14:14:44.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:14:45.016: INFO: namespace kubectl-5013 deletion completed in 6.101476698s
• [SLOW TEST:6.568 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:14:45.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 10 14:14:45.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6755'
Apr 10 14:14:45.177: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 10 14:14:45.177: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Apr 10 14:14:45.207: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Apr 10 14:14:45.211: INFO: scanned /root for discovery docs:
Apr 10 14:14:45.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6755'
Apr 10 14:15:01.061: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 10 14:15:01.061: INFO: stdout: "Created e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc\nScaling up e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Apr 10 14:15:01.061: INFO: stdout: "Created e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc\nScaling up e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Apr 10 14:15:01.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6755'
Apr 10 14:15:01.167: INFO: stderr: ""
Apr 10 14:15:01.167: INFO: stdout: "e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc-j5skh "
Apr 10 14:15:01.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc-j5skh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6755'
Apr 10 14:15:01.266: INFO: stderr: ""
Apr 10 14:15:01.266: INFO: stdout: "true"
Apr 10 14:15:01.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc-j5skh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6755'
Apr 10 14:15:01.373: INFO: stderr: ""
Apr 10 14:15:01.373: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Apr 10 14:15:01.373: INFO: e2e-test-nginx-rc-df4fdd2ebb904ea62021041b3d292bcc-j5skh is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Apr 10 14:15:01.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6755'
Apr 10 14:15:01.487: INFO: stderr: ""
Apr 10 14:15:01.487: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:15:01.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6755" for this suite.
Apr 10 14:15:07.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:15:07.587: INFO: namespace kubectl-6755 deletion completed in 6.086784603s
• [SLOW TEST:22.571 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:15:07.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 10 14:15:07.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9051'
Apr 10 14:15:07.780: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 10 14:15:07.780: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 10 14:15:09.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9051'
Apr 10 14:15:09.935: INFO: stderr: ""
Apr 10 14:15:09.935: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:15:09.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9051" for this suite.
Apr 10 14:16:31.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:16:32.049: INFO: namespace kubectl-9051 deletion completed in 1m22.111173194s
• [SLOW TEST:84.462 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:16:32.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 14:16:32.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c" in namespace "downward-api-8832" to be "success or failure"
Apr 10 14:16:32.154: INFO: Pod "downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.237225ms
Apr 10 14:16:34.158: INFO: Pod "downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050549254s
Apr 10 14:16:36.162: INFO: Pod "downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055029385s
STEP: Saw pod success
Apr 10 14:16:36.163: INFO: Pod "downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c" satisfied condition "success or failure"
Apr 10 14:16:36.165: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c container client-container:
STEP: delete the pod
Apr 10 14:16:36.185: INFO: Waiting for pod downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c to disappear
Apr 10 14:16:36.201: INFO: Pod downwardapi-volume-230635af-44c2-4ae5-85f6-35c7eb13380c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:16:36.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8832" for this suite.
Apr 10 14:16:42.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:16:42.290: INFO: namespace downward-api-8832 deletion completed in 6.084339683s
• [SLOW TEST:10.240 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:16:42.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 10 14:16:42.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a" in namespace "projected-4904" to be "success or failure"
Apr 10 14:16:42.395: INFO: Pod "downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.463317ms
Apr 10 14:16:44.399: INFO: Pod "downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021511714s
Apr 10 14:16:46.403: INFO: Pod "downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025941655s
STEP: Saw pod success
Apr 10 14:16:46.403: INFO: Pod "downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a" satisfied condition "success or failure"
Apr 10 14:16:46.406: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a container client-container:
STEP: delete the pod
Apr 10 14:16:46.442: INFO: Waiting for pod downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a to disappear
Apr 10 14:16:46.491: INFO: Pod downwardapi-volume-2c92dd29-8545-4b32-aa5f-e52ede973b5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:16:46.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4904" for this suite.
Apr 10 14:16:52.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:16:52.591: INFO: namespace projected-4904 deletion completed in 6.095798073s
• [SLOW TEST:10.301 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:16:52.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0410 14:17:02.686702 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 10 14:17:02.686: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:17:02.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8055" for this suite.
Apr 10 14:17:08.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:17:08.850: INFO: namespace gc-8055 deletion completed in 6.161442381s
• [SLOW TEST:16.260 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:17:08.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-08b7a804-9712-4c22-a990-5049a8b468d8 in namespace container-probe-4045
Apr 10 14:17:12.924: INFO: Started pod busybox-08b7a804-9712-4c22-a990-5049a8b468d8 in namespace container-probe-4045
STEP: checking the pod's current state and verifying that restartCount is present
Apr 10 14:17:12.927: INFO: Initial restart count of pod busybox-08b7a804-9712-4c22-a990-5049a8b468d8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:21:13.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4045" for this suite. Apr 10 14:21:19.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:21:19.685: INFO: namespace container-probe-4045 deletion completed in 6.152664023s • [SLOW TEST:250.834 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:21:19.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:21:19.756: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81" in namespace "projected-2052" to be "success or failure" Apr 10 14:21:19.769: INFO: Pod "downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81": Phase="Pending", Reason="", readiness=false. Elapsed: 12.812971ms Apr 10 14:21:21.774: INFO: Pod "downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017540446s Apr 10 14:21:23.778: INFO: Pod "downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02182773s STEP: Saw pod success Apr 10 14:21:23.778: INFO: Pod "downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81" satisfied condition "success or failure" Apr 10 14:21:23.782: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81 container client-container: STEP: delete the pod Apr 10 14:21:23.839: INFO: Waiting for pod downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81 to disappear Apr 10 14:21:23.872: INFO: Pod downwardapi-volume-8d20cdbf-1c89-4093-a664-4054bcb7bd81 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:21:23.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2052" for this suite. 
Apr 10 14:21:29.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:21:30.024: INFO: namespace projected-2052 deletion completed in 6.148366193s • [SLOW TEST:10.339 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:21:30.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 10 14:21:30.078: INFO: Waiting up to 5m0s for pod "downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03" in namespace "downward-api-4502" to be "success or failure" Apr 10 14:21:30.094: INFO: Pod "downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03": Phase="Pending", Reason="", readiness=false. Elapsed: 16.095907ms Apr 10 14:21:32.099: INFO: Pod "downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020493176s Apr 10 14:21:34.103: INFO: Pod "downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025022399s STEP: Saw pod success Apr 10 14:21:34.103: INFO: Pod "downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03" satisfied condition "success or failure" Apr 10 14:21:34.106: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03 container dapi-container: STEP: delete the pod Apr 10 14:21:34.124: INFO: Waiting for pod downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03 to disappear Apr 10 14:21:34.128: INFO: Pod downward-api-0dd78e31-318d-4f8a-be8e-524fb4876d03 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:21:34.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4502" for this suite. Apr 10 14:21:40.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:21:40.263: INFO: namespace downward-api-4502 deletion completed in 6.13252112s • [SLOW TEST:10.238 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:21:40.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:21:40.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed" in namespace "projected-950" to be "success or failure" Apr 10 14:21:40.320: INFO: Pod "downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683427ms Apr 10 14:21:42.324: INFO: Pod "downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007780812s Apr 10 14:21:44.329: INFO: Pod "downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012333719s STEP: Saw pod success Apr 10 14:21:44.329: INFO: Pod "downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed" satisfied condition "success or failure" Apr 10 14:21:44.332: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed container client-container: STEP: delete the pod Apr 10 14:21:44.351: INFO: Waiting for pod downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed to disappear Apr 10 14:21:44.355: INFO: Pod downwardapi-volume-d05bd6cf-8ff7-4c1f-8a61-1ee1083bd9ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:21:44.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-950" for this suite. 
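The memory-limit test above exposes the container's limit through a downward API `resourceFieldRef`, which divides the limit by a `divisor` before writing it to the volume file. A Python sketch of that arithmetic (a simplified model for illustration; the quantities and divisor here are assumptions, not taken from the test's actual pod spec):

```python
# Binary suffixes used by Kubernetes resource quantities.
BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def to_bytes(quantity: str) -> int:
    """Parse a binary-suffixed quantity string like '64Mi' into bytes."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)

def downward_api_value(limit: str, divisor: str) -> int:
    """Model of the value written to the file: limit / divisor, rounded up."""
    num, den = to_bytes(limit), to_bytes(divisor)
    return -(-num // den)  # ceiling division

# A 64Mi limit exposed with a 1Mi divisor reads back as "64":
assert downward_api_value("64Mi", "1Mi") == 64
assert downward_api_value("1Gi", "1Mi") == 1024
```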
Apr 10 14:21:50.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:21:50.500: INFO: namespace projected-950 deletion completed in 6.140783055s • [SLOW TEST:10.237 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:21:50.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 14:21:50.578: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.071118ms)
Apr 10 14:21:50.581: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.536625ms)
Apr 10 14:21:50.585: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.858953ms)
Apr 10 14:21:50.588: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.215928ms)
Apr 10 14:21:50.591: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.534847ms)
Apr 10 14:21:50.594: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.703363ms)
Apr 10 14:21:50.596: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.208468ms)
Apr 10 14:21:50.598: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.334091ms)
Apr 10 14:21:50.601: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.354839ms)
Apr 10 14:21:50.603: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.284796ms)
Apr 10 14:21:50.606: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.87493ms)
Apr 10 14:21:50.609: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.57998ms)
Apr 10 14:21:50.611: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.476829ms)
Apr 10 14:21:50.614: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.907402ms)
Apr 10 14:21:50.617: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.105421ms)
Apr 10 14:21:50.620: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.707444ms)
Apr 10 14:21:50.623: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.7542ms)
Apr 10 14:21:50.626: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.231337ms)
Apr 10 14:21:50.629: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.214728ms)
Apr 10 14:21:50.632: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.035586ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:21:50.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4818" for this suite. Apr 10 14:21:56.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:21:56.734: INFO: namespace proxy-4818 deletion completed in 6.098392183s • [SLOW TEST:6.234 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:21:56.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
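The DaemonSet check that follows skips the control-plane node because its `node-role.kubernetes.io/master:NoSchedule` taint is not tolerated by the test's daemon pods. A minimal Python sketch of that matching rule (a deliberately simplified model: exact key matching for NoSchedule taints only; real matching also handles operators, values, and the other taint effects):

```python
def tolerates(taints, tolerations):
    """Return True if every NoSchedule taint on a node is matched by a
    toleration with the same key (simplified taint/toleration check)."""
    for taint in taints:
        if taint["effect"] != "NoSchedule":
            continue
        if not any(t.get("key") == taint["key"] for t in tolerations):
            return False
    return True

control_plane = [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]

# The test DaemonSet carries no matching toleration, so the node is
# skipped when counting expected daemon pods:
assert tolerates(control_plane, []) is False
# A DaemonSet that tolerates the taint (as kube-proxy does) would be
# expected on the control-plane node too:
assert tolerates(control_plane, [{"key": "node-role.kubernetes.io/master"}]) is True
```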
Apr 10 14:21:56.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:21:56.871: INFO: Number of nodes with available pods: 0 Apr 10 14:21:56.871: INFO: Node iruya-worker is running more than one daemon pod Apr 10 14:21:57.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:21:57.880: INFO: Number of nodes with available pods: 0 Apr 10 14:21:57.880: INFO: Node iruya-worker is running more than one daemon pod Apr 10 14:21:58.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:21:58.959: INFO: Number of nodes with available pods: 0 Apr 10 14:21:58.959: INFO: Node iruya-worker is running more than one daemon pod Apr 10 14:21:59.878: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:21:59.881: INFO: Number of nodes with available pods: 0 Apr 10 14:21:59.881: INFO: Node iruya-worker is running more than one daemon pod Apr 10 14:22:00.880: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:00.897: INFO: Number of nodes with available pods: 2 Apr 10 14:22:00.897: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 10 14:22:00.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:00.934: INFO: Number of nodes with available pods: 1 Apr 10 14:22:00.934: INFO: Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:01.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:01.942: INFO: Number of nodes with available pods: 1 Apr 10 14:22:01.942: INFO: Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:02.979: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:02.982: INFO: Number of nodes with available pods: 1 Apr 10 14:22:02.982: INFO: Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:03.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:03.975: INFO: Number of nodes with available pods: 1 Apr 10 14:22:03.975: INFO: Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:04.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:04.942: INFO: Number of nodes with available pods: 1 Apr 10 14:22:04.942: INFO: Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:05.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:05.943: INFO: Number of nodes with available pods: 1 Apr 10 14:22:05.943: INFO: 
Node iruya-worker2 is running more than one daemon pod Apr 10 14:22:06.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 14:22:06.943: INFO: Number of nodes with available pods: 2 Apr 10 14:22:06.943: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8941, will wait for the garbage collector to delete the pods Apr 10 14:22:07.006: INFO: Deleting DaemonSet.extensions daemon-set took: 7.631563ms Apr 10 14:22:07.306: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273922ms Apr 10 14:22:21.910: INFO: Number of nodes with available pods: 0 Apr 10 14:22:21.910: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 14:22:21.913: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8941/daemonsets","resourceVersion":"4677908"},"items":null} Apr 10 14:22:21.916: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8941/pods","resourceVersion":"4677908"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:22:21.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8941" for this suite. 
Apr 10 14:22:27.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:22:28.041: INFO: namespace daemonsets-8941 deletion completed in 6.112169564s • [SLOW TEST:31.306 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:22:28.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-c78ef224-94c8-44ef-90c8-20a7a8045033 STEP: Creating secret with name secret-projected-all-test-volume-333b1148-10a4-4919-96f6-8f1a9dbec1fa STEP: Creating a pod to test Check all projections for projected volume plugin Apr 10 14:22:28.144: INFO: Waiting up to 5m0s for pod "projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575" in namespace "projected-9850" to be "success or failure" Apr 10 14:22:28.153: INFO: Pod "projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.105901ms Apr 10 14:22:30.167: INFO: Pod "projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023263571s Apr 10 14:22:32.171: INFO: Pod "projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027295439s STEP: Saw pod success Apr 10 14:22:32.171: INFO: Pod "projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575" satisfied condition "success or failure" Apr 10 14:22:32.175: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575 container projected-all-volume-test: STEP: delete the pod Apr 10 14:22:32.192: INFO: Waiting for pod projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575 to disappear Apr 10 14:22:32.197: INFO: Pod projected-volume-0ee416ec-b131-4b10-8ed0-3f4f450f2575 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:22:32.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9850" for this suite. 
Apr 10 14:22:38.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:22:38.294: INFO: namespace projected-9850 deletion completed in 6.093937969s • [SLOW TEST:10.253 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:22:38.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 14:22:38.345: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 10 14:22:38.364: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 10 14:22:43.369: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 10 14:22:43.370: INFO: Creating deployment "test-rolling-update-deployment" Apr 10 14:22:43.375: INFO: Ensuring deployment "test-rolling-update-deployment" gets the 
next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 10 14:22:43.380: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 10 14:22:45.388: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 10 14:22:45.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125363, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125363, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125363, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125363, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 14:22:47.395: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 10 14:22:47.406: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7950,SelfLink:/apis/apps/v1/namespaces/deployment-7950/deployments/test-rolling-update-deployment,UID:e5cb6c90-6845-4df0-b192-f073a0fe494e,ResourceVersion:4678048,Generation:1,CreationTimestamp:2020-04-10 14:22:43 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-10 14:22:43 +0000 UTC 2020-04-10 14:22:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-10 14:22:46 +0000 UTC 2020-04-10 14:22:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 10 14:22:47.409: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7950,SelfLink:/apis/apps/v1/namespaces/deployment-7950/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:17057433-e81f-4afd-918a-93281563e219,ResourceVersion:4678037,Generation:1,CreationTimestamp:2020-04-10 14:22:43 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e5cb6c90-6845-4df0-b192-f073a0fe494e 0xc002e7e847 0xc002e7e848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 10 14:22:47.409: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 10 14:22:47.409: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7950,SelfLink:/apis/apps/v1/namespaces/deployment-7950/replicasets/test-rolling-update-controller,UID:81a581b0-51a5-410d-8417-4742ee1b14cb,ResourceVersion:4678046,Generation:2,CreationTimestamp:2020-04-10 14:22:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e5cb6c90-6845-4df0-b192-f073a0fe494e 0xc002e7e777 0xc002e7e778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 10 14:22:47.411: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-wh67t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-wh67t,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7950,SelfLink:/api/v1/namespaces/deployment-7950/pods/test-rolling-update-deployment-79f6b9d75c-wh67t,UID:5daf8f51-2d04-4d56-ba25-0ddef09ff3f3,ResourceVersion:4678036,Generation:0,CreationTimestamp:2020-04-10 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 17057433-e81f-4afd-918a-93281563e219 0xc00328bda7 0xc00328bda8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-spj2p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-spj2p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-spj2p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00328be20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00328be40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:22:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:22:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:22:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:22:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.241,StartTime:2020-04-10 14:22:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-10 14:22:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5de4dc3c46967621f183e5ab91caef618f2cefee806b35f369677c26170023eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:22:47.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-7950" for this suite. Apr 10 14:22:53.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:22:53.504: INFO: namespace deployment-7950 deletion completed in 6.089888673s • [SLOW TEST:15.209 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:22:53.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 10 14:22:58.117: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d68b0461-707f-4e62-a821-731d79e0a0f5" Apr 10 14:22:58.118: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d68b0461-707f-4e62-a821-731d79e0a0f5" in namespace "pods-2931" 
to be "terminated due to deadline exceeded" Apr 10 14:22:58.124: INFO: Pod "pod-update-activedeadlineseconds-d68b0461-707f-4e62-a821-731d79e0a0f5": Phase="Running", Reason="", readiness=true. Elapsed: 6.277336ms Apr 10 14:23:00.128: INFO: Pod "pod-update-activedeadlineseconds-d68b0461-707f-4e62-a821-731d79e0a0f5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010312427s Apr 10 14:23:00.128: INFO: Pod "pod-update-activedeadlineseconds-d68b0461-707f-4e62-a821-731d79e0a0f5" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:23:00.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2931" for this suite. Apr 10 14:23:06.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:23:06.227: INFO: namespace pods-2931 deletion completed in 6.094986373s • [SLOW TEST:12.720 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:23:06.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-247ef36a-5be2-42c9-bbe2-d2de30cda12e STEP: Creating secret with name s-test-opt-upd-b077e008-1592-494f-827a-4c8ad09c3a1e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-247ef36a-5be2-42c9-bbe2-d2de30cda12e STEP: Updating secret s-test-opt-upd-b077e008-1592-494f-827a-4c8ad09c3a1e STEP: Creating secret with name s-test-opt-create-75d9c182-a110-46fe-9a8b-1dcf4ea982e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:24:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2751" for this suite. Apr 10 14:25:02.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:25:02.915: INFO: namespace projected-2751 deletion completed in 22.087829921s • [SLOW TEST:116.687 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 10 14:25:02.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:25:29.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9762" for this suite. Apr 10 14:25:35.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:25:35.250: INFO: namespace namespaces-9762 deletion completed in 6.097201256s STEP: Destroying namespace "nsdeletetest-2420" for this suite. Apr 10 14:25:35.251: INFO: Namespace nsdeletetest-2420 was already deleted STEP: Destroying namespace "nsdeletetest-2049" for this suite. 
Apr 10 14:25:41.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:25:41.403: INFO: namespace nsdeletetest-2049 deletion completed in 6.151056827s • [SLOW TEST:38.487 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:25:41.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 10 14:25:41.436: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:25:48.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3278" for this suite. 
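The init-container test above exercises standard init-container semantics: init containers run sequentially, each to completion, before any app container starts, and with `restartPolicy: Never` a failed init container marks the whole pod Failed. A minimal manifest sketching an equivalent pod (names, images, and commands are illustrative, not taken from the test):

```yaml
# Hypothetical pod spec demonstrating init-container ordering on a
# RestartNever pod; not the exact spec used by the e2e test.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # illustrative name
spec:
  restartPolicy: Never
  initContainers:            # run in order, each must exit 0
  - name: init-1
    image: busybox:1.29
    command: ['sh', '-c', 'echo first init done']
  - name: init-2
    image: busybox:1.29
    command: ['sh', '-c', 'echo second init done']
  containers:                # starts only after all init containers succeed
  - name: app
    image: busybox:1.29
    command: ['sh', '-c', 'echo app running']
```
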
Apr 10 14:25:54.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:25:54.480: INFO: namespace init-container-3278 deletion completed in 6.09090774s • [SLOW TEST:13.077 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:25:54.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-1d3e5cbb-e269-427d-8550-b1dcdbd74200 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-1d3e5cbb-e269-427d-8550-b1dcdbd74200 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:27:14.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8255" for this suite. 
Apr 10 14:27:37.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:27:37.095: INFO: namespace projected-8255 deletion completed in 22.112281063s • [SLOW TEST:102.614 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:27:37.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 10 14:27:41.172: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-052aaade-aa96-4843-88e1-901434b1b7df,GenerateName:,Namespace:events-8201,SelfLink:/api/v1/namespaces/events-8201/pods/send-events-052aaade-aa96-4843-88e1-901434b1b7df,UID:6a01b127-774a-43b2-b382-a64f6fdd4d6b,ResourceVersion:4678838,Generation:0,CreationTimestamp:2020-04-10 14:27:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
134020254,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-48bzq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-48bzq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-48bzq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fbd300} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fbd320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:27:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:27:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.245,StartTime:2020-04-10 14:27:37 +0000 UTC,ContainerStatuses:[{p {nil 
ContainerStateRunning{StartedAt:2020-04-10 14:27:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9c60f63b83e3e02781c47c1806ed962bae4bc7c743f0a905b2115e4361f10372}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 10 14:27:43.178: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 10 14:27:45.183: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:27:45.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8201" for this suite. Apr 10 14:28:23.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:28:23.316: INFO: namespace events-8201 deletion completed in 38.123189103s • [SLOW TEST:46.220 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:28:23.316: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4582 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-4582 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4582 Apr 10 14:28:23.433: INFO: Found 0 stateful pods, waiting for 1 Apr 10 14:28:33.439: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 10 14:28:33.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 14:28:35.850: INFO: stderr: "I0410 14:28:35.735781 2767 log.go:172] (0xc000116e70) (0xc0008e4780) Create stream\nI0410 14:28:35.735815 2767 log.go:172] (0xc000116e70) (0xc0008e4780) Stream added, broadcasting: 1\nI0410 14:28:35.738955 2767 log.go:172] (0xc000116e70) Reply frame received for 1\nI0410 14:28:35.738980 2767 log.go:172] (0xc000116e70) (0xc0003dc460) Create stream\nI0410 14:28:35.738990 2767 log.go:172] (0xc000116e70) (0xc0003dc460) Stream added, broadcasting: 3\nI0410 14:28:35.739954 2767 log.go:172] (0xc000116e70) Reply frame received for 3\nI0410 14:28:35.740014 2767 log.go:172] (0xc000116e70) (0xc0008ea000) Create stream\nI0410 14:28:35.740047 2767 
log.go:172] (0xc000116e70) (0xc0008ea000) Stream added, broadcasting: 5\nI0410 14:28:35.740950 2767 log.go:172] (0xc000116e70) Reply frame received for 5\nI0410 14:28:35.813639 2767 log.go:172] (0xc000116e70) Data frame received for 5\nI0410 14:28:35.813684 2767 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0410 14:28:35.813707 2767 log.go:172] (0xc0008ea000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 14:28:35.843474 2767 log.go:172] (0xc000116e70) Data frame received for 3\nI0410 14:28:35.843515 2767 log.go:172] (0xc0003dc460) (3) Data frame handling\nI0410 14:28:35.843549 2767 log.go:172] (0xc0003dc460) (3) Data frame sent\nI0410 14:28:35.843742 2767 log.go:172] (0xc000116e70) Data frame received for 5\nI0410 14:28:35.843769 2767 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0410 14:28:35.843900 2767 log.go:172] (0xc000116e70) Data frame received for 3\nI0410 14:28:35.843928 2767 log.go:172] (0xc0003dc460) (3) Data frame handling\nI0410 14:28:35.845943 2767 log.go:172] (0xc000116e70) Data frame received for 1\nI0410 14:28:35.845977 2767 log.go:172] (0xc0008e4780) (1) Data frame handling\nI0410 14:28:35.846005 2767 log.go:172] (0xc0008e4780) (1) Data frame sent\nI0410 14:28:35.846052 2767 log.go:172] (0xc000116e70) (0xc0008e4780) Stream removed, broadcasting: 1\nI0410 14:28:35.846144 2767 log.go:172] (0xc000116e70) Go away received\nI0410 14:28:35.846577 2767 log.go:172] (0xc000116e70) (0xc0008e4780) Stream removed, broadcasting: 1\nI0410 14:28:35.846596 2767 log.go:172] (0xc000116e70) (0xc0003dc460) Stream removed, broadcasting: 3\nI0410 14:28:35.846607 2767 log.go:172] (0xc000116e70) (0xc0008ea000) Stream removed, broadcasting: 5\n" Apr 10 14:28:35.850: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 14:28:35.850: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 14:28:35.854: INFO: Waiting for 
pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 10 14:28:45.859: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 14:28:45.859: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 14:28:45.875: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:28:45.875: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC }] Apr 10 14:28:45.876: INFO: Apr 10 14:28:45.876: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 10 14:28:46.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992708988s Apr 10 14:28:47.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.955482071s Apr 10 14:28:48.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949826701s Apr 10 14:28:49.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.946143097s Apr 10 14:28:50.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.941947871s Apr 10 14:28:51.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.936486456s Apr 10 14:28:52.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.929155215s Apr 10 14:28:53.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.92350193s Apr 10 14:28:54.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.943653ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4582 Apr 10 14:28:55.959: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 14:28:56.183: INFO: stderr: "I0410 14:28:56.098175 2800 log.go:172] (0xc0006e0000) (0xc0007be1e0) Create stream\nI0410 14:28:56.098227 2800 log.go:172] (0xc0006e0000) (0xc0007be1e0) Stream added, broadcasting: 1\nI0410 14:28:56.100609 2800 log.go:172] (0xc0006e0000) Reply frame received for 1\nI0410 14:28:56.100647 2800 log.go:172] (0xc0006e0000) (0xc000684280) Create stream\nI0410 14:28:56.100659 2800 log.go:172] (0xc0006e0000) (0xc000684280) Stream added, broadcasting: 3\nI0410 14:28:56.101897 2800 log.go:172] (0xc0006e0000) Reply frame received for 3\nI0410 14:28:56.101956 2800 log.go:172] (0xc0006e0000) (0xc0001b4000) Create stream\nI0410 14:28:56.101972 2800 log.go:172] (0xc0006e0000) (0xc0001b4000) Stream added, broadcasting: 5\nI0410 14:28:56.103008 2800 log.go:172] (0xc0006e0000) Reply frame received for 5\nI0410 14:28:56.176614 2800 log.go:172] (0xc0006e0000) Data frame received for 3\nI0410 14:28:56.176649 2800 log.go:172] (0xc000684280) (3) Data frame handling\nI0410 14:28:56.176674 2800 log.go:172] (0xc000684280) (3) Data frame sent\nI0410 14:28:56.176689 2800 log.go:172] (0xc0006e0000) Data frame received for 3\nI0410 14:28:56.176701 2800 log.go:172] (0xc000684280) (3) Data frame handling\nI0410 14:28:56.176974 2800 log.go:172] (0xc0006e0000) Data frame received for 5\nI0410 14:28:56.177007 2800 log.go:172] (0xc0001b4000) (5) Data frame handling\nI0410 14:28:56.177033 2800 log.go:172] (0xc0001b4000) (5) Data frame sent\nI0410 14:28:56.177057 2800 log.go:172] (0xc0006e0000) Data frame received for 5\nI0410 14:28:56.177076 2800 log.go:172] (0xc0001b4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0410 14:28:56.178849 2800 log.go:172] (0xc0006e0000) Data frame received for 1\nI0410 14:28:56.178883 2800 log.go:172] (0xc0007be1e0) (1) Data frame handling\nI0410 14:28:56.178910 
2800 log.go:172] (0xc0007be1e0) (1) Data frame sent\nI0410 14:28:56.178944 2800 log.go:172] (0xc0006e0000) (0xc0007be1e0) Stream removed, broadcasting: 1\nI0410 14:28:56.178984 2800 log.go:172] (0xc0006e0000) Go away received\nI0410 14:28:56.179348 2800 log.go:172] (0xc0006e0000) (0xc0007be1e0) Stream removed, broadcasting: 1\nI0410 14:28:56.179377 2800 log.go:172] (0xc0006e0000) (0xc000684280) Stream removed, broadcasting: 3\nI0410 14:28:56.179390 2800 log.go:172] (0xc0006e0000) (0xc0001b4000) Stream removed, broadcasting: 5\n" Apr 10 14:28:56.184: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 14:28:56.184: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 14:28:56.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 14:28:56.414: INFO: stderr: "I0410 14:28:56.337107 2820 log.go:172] (0xc000a5c370) (0xc00097e640) Create stream\nI0410 14:28:56.337251 2820 log.go:172] (0xc000a5c370) (0xc00097e640) Stream added, broadcasting: 1\nI0410 14:28:56.339947 2820 log.go:172] (0xc000a5c370) Reply frame received for 1\nI0410 14:28:56.339977 2820 log.go:172] (0xc000a5c370) (0xc000a5a000) Create stream\nI0410 14:28:56.339987 2820 log.go:172] (0xc000a5c370) (0xc000a5a000) Stream added, broadcasting: 3\nI0410 14:28:56.340858 2820 log.go:172] (0xc000a5c370) Reply frame received for 3\nI0410 14:28:56.340897 2820 log.go:172] (0xc000a5c370) (0xc00097e6e0) Create stream\nI0410 14:28:56.340911 2820 log.go:172] (0xc000a5c370) (0xc00097e6e0) Stream added, broadcasting: 5\nI0410 14:28:56.342077 2820 log.go:172] (0xc000a5c370) Reply frame received for 5\nI0410 14:28:56.407697 2820 log.go:172] (0xc000a5c370) Data frame received for 5\nI0410 14:28:56.407734 2820 log.go:172] (0xc000a5c370) Data frame received for 3\nI0410 
14:28:56.407758 2820 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0410 14:28:56.407769 2820 log.go:172] (0xc000a5a000) (3) Data frame sent\nI0410 14:28:56.407775 2820 log.go:172] (0xc000a5c370) Data frame received for 3\nI0410 14:28:56.407783 2820 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0410 14:28:56.407841 2820 log.go:172] (0xc00097e6e0) (5) Data frame handling\nI0410 14:28:56.407903 2820 log.go:172] (0xc00097e6e0) (5) Data frame sent\nI0410 14:28:56.407927 2820 log.go:172] (0xc000a5c370) Data frame received for 5\nI0410 14:28:56.407943 2820 log.go:172] (0xc00097e6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0410 14:28:56.409925 2820 log.go:172] (0xc000a5c370) Data frame received for 1\nI0410 14:28:56.409948 2820 log.go:172] (0xc00097e640) (1) Data frame handling\nI0410 14:28:56.409963 2820 log.go:172] (0xc00097e640) (1) Data frame sent\nI0410 14:28:56.409985 2820 log.go:172] (0xc000a5c370) (0xc00097e640) Stream removed, broadcasting: 1\nI0410 14:28:56.410006 2820 log.go:172] (0xc000a5c370) Go away received\nI0410 14:28:56.410421 2820 log.go:172] (0xc000a5c370) (0xc00097e640) Stream removed, broadcasting: 1\nI0410 14:28:56.410442 2820 log.go:172] (0xc000a5c370) (0xc000a5a000) Stream removed, broadcasting: 3\nI0410 14:28:56.410452 2820 log.go:172] (0xc000a5c370) (0xc00097e6e0) Stream removed, broadcasting: 5\n" Apr 10 14:28:56.414: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 14:28:56.414: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 14:28:56.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 10 14:28:56.614: INFO: stderr: "I0410 14:28:56.539395 2841 log.go:172] (0xc000a10840) 
(0xc0009a0820) Create stream\nI0410 14:28:56.539456 2841 log.go:172] (0xc000a10840) (0xc0009a0820) Stream added, broadcasting: 1\nI0410 14:28:56.542758 2841 log.go:172] (0xc000a10840) Reply frame received for 1\nI0410 14:28:56.542831 2841 log.go:172] (0xc000a10840) (0xc000a12000) Create stream\nI0410 14:28:56.542847 2841 log.go:172] (0xc000a10840) (0xc000a12000) Stream added, broadcasting: 3\nI0410 14:28:56.543727 2841 log.go:172] (0xc000a10840) Reply frame received for 3\nI0410 14:28:56.543760 2841 log.go:172] (0xc000a10840) (0xc000a12140) Create stream\nI0410 14:28:56.543771 2841 log.go:172] (0xc000a10840) (0xc000a12140) Stream added, broadcasting: 5\nI0410 14:28:56.544709 2841 log.go:172] (0xc000a10840) Reply frame received for 5\nI0410 14:28:56.609288 2841 log.go:172] (0xc000a10840) Data frame received for 5\nI0410 14:28:56.609472 2841 log.go:172] (0xc000a12140) (5) Data frame handling\nI0410 14:28:56.609508 2841 log.go:172] (0xc000a12140) (5) Data frame sent\nI0410 14:28:56.609525 2841 log.go:172] (0xc000a10840) Data frame received for 5\nI0410 14:28:56.609535 2841 log.go:172] (0xc000a12140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0410 14:28:56.609560 2841 log.go:172] (0xc000a10840) Data frame received for 3\nI0410 14:28:56.609574 2841 log.go:172] (0xc000a12000) (3) Data frame handling\nI0410 14:28:56.609598 2841 log.go:172] (0xc000a12000) (3) Data frame sent\nI0410 14:28:56.609621 2841 log.go:172] (0xc000a10840) Data frame received for 3\nI0410 14:28:56.609645 2841 log.go:172] (0xc000a12000) (3) Data frame handling\nI0410 14:28:56.610885 2841 log.go:172] (0xc000a10840) Data frame received for 1\nI0410 14:28:56.610924 2841 log.go:172] (0xc0009a0820) (1) Data frame handling\nI0410 14:28:56.610947 2841 log.go:172] (0xc0009a0820) (1) Data frame sent\nI0410 14:28:56.610968 2841 log.go:172] (0xc000a10840) (0xc0009a0820) Stream removed, broadcasting: 1\nI0410 
14:28:56.610988 2841 log.go:172] (0xc000a10840) Go away received\nI0410 14:28:56.611554 2841 log.go:172] (0xc000a10840) (0xc0009a0820) Stream removed, broadcasting: 1\nI0410 14:28:56.611579 2841 log.go:172] (0xc000a10840) (0xc000a12000) Stream removed, broadcasting: 3\nI0410 14:28:56.611590 2841 log.go:172] (0xc000a10840) (0xc000a12140) Stream removed, broadcasting: 5\n" Apr 10 14:28:56.614: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 10 14:28:56.614: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 10 14:28:56.619: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 10 14:29:06.624: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 14:29:06.624: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 14:29:06.624: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 10 14:29:06.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 14:29:06.854: INFO: stderr: "I0410 14:29:06.753458 2864 log.go:172] (0xc0006eac60) (0xc0006feb40) Create stream\nI0410 14:29:06.753523 2864 log.go:172] (0xc0006eac60) (0xc0006feb40) Stream added, broadcasting: 1\nI0410 14:29:06.756739 2864 log.go:172] (0xc0006eac60) Reply frame received for 1\nI0410 14:29:06.756811 2864 log.go:172] (0xc0006eac60) (0xc0009e8000) Create stream\nI0410 14:29:06.756847 2864 log.go:172] (0xc0006eac60) (0xc0009e8000) Stream added, broadcasting: 3\nI0410 14:29:06.757982 2864 log.go:172] (0xc0006eac60) Reply frame received for 3\nI0410 14:29:06.758041 2864 log.go:172] (0xc0006eac60) (0xc0009e80a0) Create stream\nI0410 14:29:06.758070 
2864 log.go:172] (0xc0006eac60) (0xc0009e80a0) Stream added, broadcasting: 5\nI0410 14:29:06.759052 2864 log.go:172] (0xc0006eac60) Reply frame received for 5\nI0410 14:29:06.843759 2864 log.go:172] (0xc0006eac60) Data frame received for 3\nI0410 14:29:06.843812 2864 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0410 14:29:06.843830 2864 log.go:172] (0xc0009e8000) (3) Data frame sent\nI0410 14:29:06.843843 2864 log.go:172] (0xc0006eac60) Data frame received for 3\nI0410 14:29:06.843854 2864 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0410 14:29:06.843940 2864 log.go:172] (0xc0006eac60) Data frame received for 5\nI0410 14:29:06.843996 2864 log.go:172] (0xc0009e80a0) (5) Data frame handling\nI0410 14:29:06.844010 2864 log.go:172] (0xc0009e80a0) (5) Data frame sent\nI0410 14:29:06.844021 2864 log.go:172] (0xc0006eac60) Data frame received for 5\nI0410 14:29:06.844026 2864 log.go:172] (0xc0009e80a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 14:29:06.845724 2864 log.go:172] (0xc0006eac60) Data frame received for 1\nI0410 14:29:06.845755 2864 log.go:172] (0xc0006feb40) (1) Data frame handling\nI0410 14:29:06.845765 2864 log.go:172] (0xc0006feb40) (1) Data frame sent\nI0410 14:29:06.845778 2864 log.go:172] (0xc0006eac60) (0xc0006feb40) Stream removed, broadcasting: 1\nI0410 14:29:06.846381 2864 log.go:172] (0xc0006eac60) Go away received\nI0410 14:29:06.846851 2864 log.go:172] (0xc0006eac60) (0xc0006feb40) Stream removed, broadcasting: 1\nI0410 14:29:06.846891 2864 log.go:172] (0xc0006eac60) (0xc0009e8000) Stream removed, broadcasting: 3\nI0410 14:29:06.846934 2864 log.go:172] (0xc0006eac60) (0xc0009e80a0) Stream removed, broadcasting: 5\n" Apr 10 14:29:06.854: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 14:29:06.854: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 14:29:06.854: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 14:29:07.093: INFO: stderr: "I0410 14:29:06.997717 2888 log.go:172] (0xc0007249a0) (0xc00035a8c0) Create stream\nI0410 14:29:06.997762 2888 log.go:172] (0xc0007249a0) (0xc00035a8c0) Stream added, broadcasting: 1\nI0410 14:29:07.000022 2888 log.go:172] (0xc0007249a0) Reply frame received for 1\nI0410 14:29:07.000061 2888 log.go:172] (0xc0007249a0) (0xc000812000) Create stream\nI0410 14:29:07.000080 2888 log.go:172] (0xc0007249a0) (0xc000812000) Stream added, broadcasting: 3\nI0410 14:29:07.001284 2888 log.go:172] (0xc0007249a0) Reply frame received for 3\nI0410 14:29:07.001316 2888 log.go:172] (0xc0007249a0) (0xc00035a960) Create stream\nI0410 14:29:07.001324 2888 log.go:172] (0xc0007249a0) (0xc00035a960) Stream added, broadcasting: 5\nI0410 14:29:07.002176 2888 log.go:172] (0xc0007249a0) Reply frame received for 5\nI0410 14:29:07.053957 2888 log.go:172] (0xc0007249a0) Data frame received for 5\nI0410 14:29:07.054091 2888 log.go:172] (0xc00035a960) (5) Data frame handling\nI0410 14:29:07.054145 2888 log.go:172] (0xc00035a960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 14:29:07.084655 2888 log.go:172] (0xc0007249a0) Data frame received for 3\nI0410 14:29:07.084684 2888 log.go:172] (0xc000812000) (3) Data frame handling\nI0410 14:29:07.084713 2888 log.go:172] (0xc000812000) (3) Data frame sent\nI0410 14:29:07.085046 2888 log.go:172] (0xc0007249a0) Data frame received for 5\nI0410 14:29:07.085079 2888 log.go:172] (0xc00035a960) (5) Data frame handling\nI0410 14:29:07.085101 2888 log.go:172] (0xc0007249a0) Data frame received for 3\nI0410 14:29:07.085277 2888 log.go:172] (0xc000812000) (3) Data frame handling\nI0410 14:29:07.087438 2888 log.go:172] (0xc0007249a0) Data frame received for 1\nI0410 14:29:07.087469 2888 log.go:172] (0xc00035a8c0) (1) Data frame 
handling\nI0410 14:29:07.087487 2888 log.go:172] (0xc00035a8c0) (1) Data frame sent\nI0410 14:29:07.087505 2888 log.go:172] (0xc0007249a0) (0xc00035a8c0) Stream removed, broadcasting: 1\nI0410 14:29:07.087533 2888 log.go:172] (0xc0007249a0) Go away received\nI0410 14:29:07.088035 2888 log.go:172] (0xc0007249a0) (0xc00035a8c0) Stream removed, broadcasting: 1\nI0410 14:29:07.088075 2888 log.go:172] (0xc0007249a0) (0xc000812000) Stream removed, broadcasting: 3\nI0410 14:29:07.088099 2888 log.go:172] (0xc0007249a0) (0xc00035a960) Stream removed, broadcasting: 5\n" Apr 10 14:29:07.093: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 14:29:07.093: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 14:29:07.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4582 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 10 14:29:07.308: INFO: stderr: "I0410 14:29:07.216237 2910 log.go:172] (0xc0009be630) (0xc00010ebe0) Create stream\nI0410 14:29:07.216291 2910 log.go:172] (0xc0009be630) (0xc00010ebe0) Stream added, broadcasting: 1\nI0410 14:29:07.219726 2910 log.go:172] (0xc0009be630) Reply frame received for 1\nI0410 14:29:07.219763 2910 log.go:172] (0xc0009be630) (0xc00010e320) Create stream\nI0410 14:29:07.219777 2910 log.go:172] (0xc0009be630) (0xc00010e320) Stream added, broadcasting: 3\nI0410 14:29:07.220715 2910 log.go:172] (0xc0009be630) Reply frame received for 3\nI0410 14:29:07.220752 2910 log.go:172] (0xc0009be630) (0xc00010e3c0) Create stream\nI0410 14:29:07.220761 2910 log.go:172] (0xc0009be630) (0xc00010e3c0) Stream added, broadcasting: 5\nI0410 14:29:07.221725 2910 log.go:172] (0xc0009be630) Reply frame received for 5\nI0410 14:29:07.277095 2910 log.go:172] (0xc0009be630) Data frame received for 5\nI0410 14:29:07.277260 2910 log.go:172] (0xc00010e3c0) (5) Data 
frame handling\nI0410 14:29:07.277283 2910 log.go:172] (0xc00010e3c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0410 14:29:07.300775 2910 log.go:172] (0xc0009be630) Data frame received for 5\nI0410 14:29:07.300808 2910 log.go:172] (0xc00010e3c0) (5) Data frame handling\nI0410 14:29:07.300838 2910 log.go:172] (0xc0009be630) Data frame received for 3\nI0410 14:29:07.300855 2910 log.go:172] (0xc00010e320) (3) Data frame handling\nI0410 14:29:07.300877 2910 log.go:172] (0xc00010e320) (3) Data frame sent\nI0410 14:29:07.300894 2910 log.go:172] (0xc0009be630) Data frame received for 3\nI0410 14:29:07.300907 2910 log.go:172] (0xc00010e320) (3) Data frame handling\nI0410 14:29:07.302971 2910 log.go:172] (0xc0009be630) Data frame received for 1\nI0410 14:29:07.303101 2910 log.go:172] (0xc00010ebe0) (1) Data frame handling\nI0410 14:29:07.303135 2910 log.go:172] (0xc00010ebe0) (1) Data frame sent\nI0410 14:29:07.303163 2910 log.go:172] (0xc0009be630) (0xc00010ebe0) Stream removed, broadcasting: 1\nI0410 14:29:07.303253 2910 log.go:172] (0xc0009be630) Go away received\nI0410 14:29:07.303534 2910 log.go:172] (0xc0009be630) (0xc00010ebe0) Stream removed, broadcasting: 1\nI0410 14:29:07.303558 2910 log.go:172] (0xc0009be630) (0xc00010e320) Stream removed, broadcasting: 3\nI0410 14:29:07.303570 2910 log.go:172] (0xc0009be630) (0xc00010e3c0) Stream removed, broadcasting: 5\n" Apr 10 14:29:07.308: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 10 14:29:07.308: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 10 14:29:07.308: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 14:29:07.311: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 10 14:29:17.321: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 14:29:17.321: INFO: Waiting for pod ss-1 
to enter Running - Ready=false, currently Running - Ready=false Apr 10 14:29:17.322: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 10 14:29:17.338: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:29:17.338: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC }] Apr 10 14:29:17.338: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:17.338: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:17.338: INFO: Apr 10 14:29:17.338: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 14:29:18.343: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:29:18.343: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC }] Apr 10 14:29:18.343: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:18.343: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:18.343: INFO: Apr 10 14:29:18.343: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 14:29:19.349: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:29:19.349: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 
14:28:23 +0000 UTC }] Apr 10 14:29:19.349: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:19.349: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:19.349: INFO: Apr 10 14:29:19.349: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 14:29:20.354: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:29:20.354: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC }] Apr 10 14:29:20.354: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:20.354: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:20.354: INFO: Apr 10 14:29:20.354: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 14:29:21.360: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 14:29:21.360: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:23 +0000 UTC }] Apr 10 14:29:21.360: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:29:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:28:45 +0000 UTC }] Apr 10 14:29:21.360: INFO: Apr 10 14:29:21.360: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 10 14:29:22.365: INFO: Verifying statefulset ss doesn't scale past 0 for 
another 4.968381634s Apr 10 14:29:23.369: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.963403961s Apr 10 14:29:24.373: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.95884571s Apr 10 14:29:25.378: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.954760663s Apr 10 14:29:26.382: INFO: Verifying statefulset ss doesn't scale past 0 for another 950.070532ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4582 Apr 10 14:29:27.416: INFO: Scaling statefulset ss to 0 Apr 10 14:29:27.423: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 10 14:29:27.425: INFO: Deleting all statefulset in ns statefulset-4582 Apr 10 14:29:27.452: INFO: Scaling statefulset ss to 0 Apr 10 14:29:27.459: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 14:29:27.461: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:29:27.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4582" for this suite. 
Apr 10 14:29:33.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:29:33.563: INFO: namespace statefulset-4582 deletion completed in 6.089092169s • [SLOW TEST:70.247 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:29:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-6e0e213b-59e1-4bd7-b7d2-3dc34d2ac40a STEP: Creating a pod to test consume configMaps Apr 10 14:29:33.628: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2" in namespace "projected-8358" to be "success or failure" Apr 10 14:29:33.674: INFO: Pod "pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 45.951033ms Apr 10 14:29:35.678: INFO: Pod "pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050676245s Apr 10 14:29:37.683: INFO: Pod "pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054998284s STEP: Saw pod success Apr 10 14:29:37.683: INFO: Pod "pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2" satisfied condition "success or failure" Apr 10 14:29:37.686: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2 container projected-configmap-volume-test: STEP: delete the pod Apr 10 14:29:37.758: INFO: Waiting for pod pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2 to disappear Apr 10 14:29:37.770: INFO: Pod pod-projected-configmaps-023a64ee-4d98-4f86-8f82-5fed95da55d2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:29:37.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8358" for this suite. 
Apr 10 14:29:43.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:29:43.876: INFO: namespace projected-8358 deletion completed in 6.103287832s • [SLOW TEST:10.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:29:43.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 10 14:29:48.456: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4144 pod-service-account-bc14fa7c-42ca-4126-b4f3-fd18eb6a7a8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 10 14:29:48.703: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4144 pod-service-account-bc14fa7c-42ca-4126-b4f3-fd18eb6a7a8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 10 14:29:48.900: INFO: 
Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4144 pod-service-account-bc14fa7c-42ca-4126-b4f3-fd18eb6a7a8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:29:49.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4144" for this suite. Apr 10 14:29:55.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:29:55.209: INFO: namespace svcaccounts-4144 deletion completed in 6.105671137s • [SLOW TEST:11.332 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:29:55.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 10 14:29:55.292: INFO: Waiting up to 5m0s for pod "pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f" in 
namespace "emptydir-1620" to be "success or failure" Apr 10 14:29:55.296: INFO: Pod "pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.350807ms Apr 10 14:29:57.300: INFO: Pod "pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007357009s Apr 10 14:29:59.304: INFO: Pod "pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011950335s STEP: Saw pod success Apr 10 14:29:59.304: INFO: Pod "pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f" satisfied condition "success or failure" Apr 10 14:29:59.308: INFO: Trying to get logs from node iruya-worker2 pod pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f container test-container: STEP: delete the pod Apr 10 14:29:59.328: INFO: Waiting for pod pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f to disappear Apr 10 14:29:59.332: INFO: Pod pod-56db8eaa-3c36-469e-9688-ae4f1f833d6f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:29:59.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1620" for this suite. 
Apr 10 14:30:05.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:30:05.415: INFO: namespace emptydir-1620 deletion completed in 6.078598783s • [SLOW TEST:10.206 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:30:05.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 10 14:30:05.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a" in namespace "projected-2801" to be "success or failure" Apr 10 14:30:05.477: INFO: Pod "downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.495432ms Apr 10 14:30:07.480: INFO: Pod "downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006801329s Apr 10 14:30:09.485: INFO: Pod "downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011429404s STEP: Saw pod success Apr 10 14:30:09.485: INFO: Pod "downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a" satisfied condition "success or failure" Apr 10 14:30:09.488: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a container client-container: STEP: delete the pod Apr 10 14:30:09.550: INFO: Waiting for pod downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a to disappear Apr 10 14:30:09.555: INFO: Pod downwardapi-volume-7b4ea329-19d6-4cb2-9cf3-6ec5590b6f6a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:30:09.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2801" for this suite. 
Apr 10 14:30:15.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:30:15.654: INFO: namespace projected-2801 deletion completed in 6.095164006s • [SLOW TEST:10.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:30:15.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-cd626b91-6fa5-4e9d-9d68-c3b28e78098f STEP: Creating a pod to test consume configMaps Apr 10 14:30:15.765: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef" in namespace "projected-2477" to be "success or failure" Apr 10 14:30:15.797: INFO: Pod "pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.875876ms Apr 10 14:30:17.801: INFO: Pod "pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035651649s Apr 10 14:30:19.806: INFO: Pod "pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040733319s STEP: Saw pod success Apr 10 14:30:19.806: INFO: Pod "pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef" satisfied condition "success or failure" Apr 10 14:30:19.810: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef container projected-configmap-volume-test: STEP: delete the pod Apr 10 14:30:19.831: INFO: Waiting for pod pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef to disappear Apr 10 14:30:19.896: INFO: Pod pod-projected-configmaps-5e2ab86a-0748-4f72-b564-9050f41635ef no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:30:19.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2477" for this suite. 
Apr 10 14:30:25.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:30:26.020: INFO: namespace projected-2477 deletion completed in 6.119707624s • [SLOW TEST:10.365 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:30:26.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 10 14:30:31.117: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:30:32.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replicaset-8279" for this suite. Apr 10 14:30:54.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:30:54.295: INFO: namespace replicaset-8279 deletion completed in 22.115343389s • [SLOW TEST:28.274 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:30:54.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 10 14:30:54.340: INFO: PodSpec: initContainers in spec.initContainers Apr 10 14:31:46.012: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7584b547-f132-4d54-9b90-923d9c21ff5c", GenerateName:"", Namespace:"init-container-1513", 
SelfLink:"/api/v1/namespaces/init-container-1513/pods/pod-init-7584b547-f132-4d54-9b90-923d9c21ff5c", UID:"93ffdc25-5567-40bd-9231-2cd8caf5ff37", ResourceVersion:"4679701", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722125854, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"340743174"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r7mhv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bd3e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r7mhv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r7mhv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r7mhv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00328ad38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c46f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00328adc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00328ade0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00328ade8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00328adec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125854, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125854, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125854, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722125854, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.251", StartTime:(*v1.Time)(0xc001fcb100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fcb160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000982850)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://619b977dc3942c1725ab329f1620599c919029453c9304df0dba2553ef297359"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fcb180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fcb140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:31:46.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1513" for this suite. 
Apr 10 14:32:08.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:32:08.118: INFO: namespace init-container-1513 deletion completed in 22.098905821s • [SLOW TEST:73.822 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:32:08.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 10 14:32:08.177: INFO: Waiting up to 5m0s for pod "pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7" in namespace "emptydir-1018" to be "success or failure" Apr 10 14:32:08.192: INFO: Pod "pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.512109ms Apr 10 14:32:10.199: INFO: Pod "pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021853952s Apr 10 14:32:12.203: INFO: Pod "pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025821206s STEP: Saw pod success Apr 10 14:32:12.203: INFO: Pod "pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7" satisfied condition "success or failure" Apr 10 14:32:12.206: INFO: Trying to get logs from node iruya-worker2 pod pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7 container test-container: STEP: delete the pod Apr 10 14:32:12.237: INFO: Waiting for pod pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7 to disappear Apr 10 14:32:12.239: INFO: Pod pod-c4615e5f-17fb-43cd-90e9-0b30a15dcfc7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:32:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1018" for this suite. Apr 10 14:32:18.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:32:18.331: INFO: namespace emptydir-1018 deletion completed in 6.088339791s • [SLOW TEST:10.213 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:32:18.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3062 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3062 to expose endpoints map[] Apr 10 14:32:18.435: INFO: Get endpoints failed (30.801095ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 10 14:32:19.438: INFO: successfully validated that service multi-endpoint-test in namespace services-3062 exposes endpoints map[] (1.034036653s elapsed) STEP: Creating pod pod1 in namespace services-3062 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3062 to expose endpoints map[pod1:[100]] Apr 10 14:32:22.494: INFO: successfully validated that service multi-endpoint-test in namespace services-3062 exposes endpoints map[pod1:[100]] (3.049237493s elapsed) STEP: Creating pod pod2 in namespace services-3062 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3062 to expose endpoints map[pod1:[100] pod2:[101]] Apr 10 14:32:25.615: INFO: successfully validated that service multi-endpoint-test in namespace services-3062 exposes endpoints map[pod1:[100] pod2:[101]] (3.116991118s elapsed) STEP: Deleting pod pod1 in namespace services-3062 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3062 to expose endpoints map[pod2:[101]] Apr 10 14:32:26.678: INFO: successfully validated that service multi-endpoint-test in namespace services-3062 exposes endpoints map[pod2:[101]] (1.058080709s elapsed) STEP: Deleting pod pod2 in namespace services-3062 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-3062 to expose endpoints map[] Apr 10 14:32:27.693: INFO: successfully validated that service multi-endpoint-test in namespace services-3062 exposes endpoints map[] (1.010057707s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:32:27.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3062" for this suite. Apr 10 14:32:33.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:32:33.826: INFO: namespace services-3062 deletion completed in 6.082352125s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.495 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:32:33.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name 
secret-emptykey-test-bf745df7-e554-474d-8f5f-95dfa120bcb8 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:32:33.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-539" for this suite. Apr 10 14:32:39.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:32:40.008: INFO: namespace secrets-539 deletion completed in 6.084100212s • [SLOW TEST:6.181 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:32:40.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 10 14:32:40.063: INFO: Waiting up to 5m0s for pod "pod-467de412-d67c-485d-951a-f17584a828ac" in namespace "emptydir-9117" to be "success or failure" Apr 10 14:32:40.073: INFO: Pod 
"pod-467de412-d67c-485d-951a-f17584a828ac": Phase="Pending", Reason="", readiness=false. Elapsed: 9.86185ms Apr 10 14:32:42.077: INFO: Pod "pod-467de412-d67c-485d-951a-f17584a828ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01404475s Apr 10 14:32:44.082: INFO: Pod "pod-467de412-d67c-485d-951a-f17584a828ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01851164s STEP: Saw pod success Apr 10 14:32:44.082: INFO: Pod "pod-467de412-d67c-485d-951a-f17584a828ac" satisfied condition "success or failure" Apr 10 14:32:44.085: INFO: Trying to get logs from node iruya-worker pod pod-467de412-d67c-485d-951a-f17584a828ac container test-container: STEP: delete the pod Apr 10 14:32:44.105: INFO: Waiting for pod pod-467de412-d67c-485d-951a-f17584a828ac to disappear Apr 10 14:32:44.109: INFO: Pod pod-467de412-d67c-485d-951a-f17584a828ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:32:44.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9117" for this suite. 
Apr 10 14:32:50.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:32:50.207: INFO: namespace emptydir-9117 deletion completed in 6.094948964s

• [SLOW TEST:10.198 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:32:50.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 10 14:32:50.261: INFO: Waiting up to 5m0s for pod "pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1" in namespace "emptydir-6647" to be "success or failure"
Apr 10 14:32:50.265: INFO: Pod "pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.531308ms
Apr 10 14:32:52.268: INFO: Pod "pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006746882s
Apr 10 14:32:54.272: INFO: Pod "pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01075388s
STEP: Saw pod success
Apr 10 14:32:54.272: INFO: Pod "pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1" satisfied condition "success or failure"
Apr 10 14:32:54.275: INFO: Trying to get logs from node iruya-worker2 pod pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1 container test-container:
STEP: delete the pod
Apr 10 14:32:54.291: INFO: Waiting for pod pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1 to disappear
Apr 10 14:32:54.295: INFO: Pod pod-fc5b7ddf-0ebb-4f00-b2e6-1633a99aa1a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:32:54.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6647" for this suite.
Apr 10 14:33:00.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:33:00.390: INFO: namespace emptydir-6647 deletion completed in 6.091986715s

• [SLOW TEST:10.184 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:33:00.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 14:33:00.502: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 10 14:33:00.511: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:00.533: INFO: Number of nodes with available pods: 0
Apr 10 14:33:00.533: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:01.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:01.542: INFO: Number of nodes with available pods: 0
Apr 10 14:33:01.542: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:02.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:02.585: INFO: Number of nodes with available pods: 0
Apr 10 14:33:02.585: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:03.537: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:03.541: INFO: Number of nodes with available pods: 0
Apr 10 14:33:03.541: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:04.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:04.542: INFO: Number of nodes with available pods: 2
Apr 10 14:33:04.542: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 10 14:33:04.583: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:04.583: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:04.595: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:05.613: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:05.613: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:05.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:06.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:06.600: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:06.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:07.601: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:07.601: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:07.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:08.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:08.600: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:08.600: INFO: Pod daemon-set-5l9t9 is not available
Apr 10 14:33:08.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:09.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:09.600: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:09.600: INFO: Pod daemon-set-5l9t9 is not available
Apr 10 14:33:09.606: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:10.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:10.600: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:10.600: INFO: Pod daemon-set-5l9t9 is not available
Apr 10 14:33:10.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:11.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:11.600: INFO: Wrong image for pod: daemon-set-5l9t9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:11.600: INFO: Pod daemon-set-5l9t9 is not available
Apr 10 14:33:11.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:12.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:12.600: INFO: Pod daemon-set-82swv is not available
Apr 10 14:33:12.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:13.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:13.600: INFO: Pod daemon-set-82swv is not available
Apr 10 14:33:13.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:14.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:14.600: INFO: Pod daemon-set-82swv is not available
Apr 10 14:33:14.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:15.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:15.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:16.601: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:16.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:17.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:17.600: INFO: Pod daemon-set-2vg82 is not available
Apr 10 14:33:17.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:18.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:18.600: INFO: Pod daemon-set-2vg82 is not available
Apr 10 14:33:18.605: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:19.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:19.600: INFO: Pod daemon-set-2vg82 is not available
Apr 10 14:33:19.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:20.599: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:20.599: INFO: Pod daemon-set-2vg82 is not available
Apr 10 14:33:20.603: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:21.600: INFO: Wrong image for pod: daemon-set-2vg82. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 10 14:33:21.600: INFO: Pod daemon-set-2vg82 is not available
Apr 10 14:33:21.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:22.600: INFO: Pod daemon-set-4vn7s is not available
Apr 10 14:33:22.604: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 10 14:33:22.607: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:22.610: INFO: Number of nodes with available pods: 1
Apr 10 14:33:22.610: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:23.615: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:23.618: INFO: Number of nodes with available pods: 1
Apr 10 14:33:23.618: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:24.614: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:24.635: INFO: Number of nodes with available pods: 1
Apr 10 14:33:24.635: INFO: Node iruya-worker is running more than one daemon pod
Apr 10 14:33:25.615: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 10 14:33:25.618: INFO: Number of nodes with available pods: 2
Apr 10 14:33:25.618: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5684, will wait for the garbage collector to delete the pods
Apr 10 14:33:25.721: INFO: Deleting DaemonSet.extensions daemon-set took: 36.201334ms
Apr 10 14:33:26.021: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.335895ms
Apr 10 14:33:32.234: INFO: Number of nodes with available pods: 0
Apr 10 14:33:32.234: INFO: Number of running nodes: 0, number of available pods: 0
Apr 10 14:33:32.237: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5684/daemonsets","resourceVersion":"4680135"},"items":null}
Apr 10 14:33:32.239: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5684/pods","resourceVersion":"4680135"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:33:32.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5684" for this suite.
Apr 10 14:33:38.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:33:38.344: INFO: namespace daemonsets-5684 deletion completed in 6.093311971s

• [SLOW TEST:37.953 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:33:38.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 10 14:33:38.388: INFO: Creating deployment "test-recreate-deployment"
Apr 10 14:33:38.398: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 10 14:33:38.450: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 10 14:33:40.457: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 10 14:33:40.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722126018, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722126018, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722126018, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722126018, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 10 14:33:42.463: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 10 14:33:42.471: INFO: Updating deployment test-recreate-deployment
Apr 10 14:33:42.471: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 10 14:33:42.699: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/deployments/test-recreate-deployment,UID:482520a9-a253-4e4a-ac35-42f79c6c4519,ResourceVersion:4680229,Generation:2,CreationTimestamp:2020-04-10 14:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-10 14:33:42 +0000 UTC 2020-04-10 14:33:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-10 14:33:42 +0000 UTC 2020-04-10 14:33:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Apr 10 14:33:42.816: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8a7be983-3f41-446e-b4ca-8867f5ca2e31,ResourceVersion:4680227,Generation:1,CreationTimestamp:2020-04-10 14:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 482520a9-a253-4e4a-ac35-42f79c6c4519 0xc00280f917 0xc00280f918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 10 14:33:42.816: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Apr 10 14:33:42.816: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4018,SelfLink:/apis/apps/v1/namespaces/deployment-4018/replicasets/test-recreate-deployment-6df85df6b9,UID:dc1955ad-3c9b-4610-a9b4-5e58e0a0305a,ResourceVersion:4680218,Generation:2,CreationTimestamp:2020-04-10 14:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 482520a9-a253-4e4a-ac35-42f79c6c4519 0xc00280f9e7 0xc00280f9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 10 14:33:42.820: INFO: Pod "test-recreate-deployment-5c8c9cc69d-vrsxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-vrsxq,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4018,SelfLink:/api/v1/namespaces/deployment-4018/pods/test-recreate-deployment-5c8c9cc69d-vrsxq,UID:13908a2b-9e0b-4111-99d7-1cdb60fc531f,ResourceVersion:4680230,Generation:0,CreationTimestamp:2020-04-10 14:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8a7be983-3f41-446e-b4ca-8867f5ca2e31 0xc0029c4297 0xc0029c4298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hlss {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hlss,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2hlss true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c4310} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029c4330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:33:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:33:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 14:33:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-10 14:33:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:33:42.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4018" for this suite.
Apr 10 14:33:48.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:33:48.929: INFO: namespace deployment-4018 deletion completed in 6.105762138s • [SLOW TEST:10.585 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:33:48.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 10 14:33:48.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2110' Apr 10 14:33:49.338: INFO: stderr: "" Apr 10 14:33:49.338: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 10 14:33:49.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:33:49.497: INFO: stderr: "" Apr 10 14:33:49.497: INFO: stdout: "update-demo-nautilus-ngnf6 update-demo-nautilus-qwmnq " Apr 10 14:33:49.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:33:49.594: INFO: stderr: "" Apr 10 14:33:49.594: INFO: stdout: "" Apr 10 14:33:49.594: INFO: update-demo-nautilus-ngnf6 is created but not running Apr 10 14:33:54.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:33:54.693: INFO: stderr: "" Apr 10 14:33:54.693: INFO: stdout: "update-demo-nautilus-ngnf6 update-demo-nautilus-qwmnq " Apr 10 14:33:54.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:33:54.789: INFO: stderr: "" Apr 10 14:33:54.789: INFO: stdout: "true" Apr 10 14:33:54.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:33:54.872: INFO: stderr: "" Apr 10 14:33:54.872: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 14:33:54.872: INFO: validating pod update-demo-nautilus-ngnf6 Apr 10 14:33:54.876: INFO: got data: { "image": "nautilus.jpg" } Apr 10 14:33:54.876: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 14:33:54.876: INFO: update-demo-nautilus-ngnf6 is verified up and running Apr 10 14:33:54.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwmnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:33:54.966: INFO: stderr: "" Apr 10 14:33:54.966: INFO: stdout: "true" Apr 10 14:33:54.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwmnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:33:55.061: INFO: stderr: "" Apr 10 14:33:55.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 14:33:55.061: INFO: validating pod update-demo-nautilus-qwmnq Apr 10 14:33:55.065: INFO: got data: { "image": "nautilus.jpg" } Apr 10 14:33:55.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 14:33:55.065: INFO: update-demo-nautilus-qwmnq is verified up and running STEP: scaling down the replication controller Apr 10 14:33:55.068: INFO: scanned /root for discovery docs: Apr 10 14:33:55.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2110' Apr 10 14:33:56.183: INFO: stderr: "" Apr 10 14:33:56.183: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 14:33:56.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:33:56.292: INFO: stderr: "" Apr 10 14:33:56.292: INFO: stdout: "update-demo-nautilus-ngnf6 update-demo-nautilus-qwmnq " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 10 14:34:01.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:34:01.386: INFO: stderr: "" Apr 10 14:34:01.386: INFO: stdout: "update-demo-nautilus-ngnf6 update-demo-nautilus-qwmnq " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 10 14:34:06.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:34:06.480: INFO: stderr: "" Apr 10 14:34:06.480: INFO: stdout: "update-demo-nautilus-ngnf6 " Apr 10 14:34:06.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:06.572: INFO: stderr: "" Apr 10 14:34:06.572: INFO: stdout: "true" Apr 10 14:34:06.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:06.667: INFO: stderr: "" Apr 10 14:34:06.667: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 14:34:06.668: INFO: validating pod update-demo-nautilus-ngnf6 Apr 10 14:34:06.671: INFO: got data: { "image": "nautilus.jpg" } Apr 10 14:34:06.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 14:34:06.671: INFO: update-demo-nautilus-ngnf6 is verified up and running STEP: scaling up the replication controller Apr 10 14:34:06.674: INFO: scanned /root for discovery docs: Apr 10 14:34:06.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2110' Apr 10 14:34:07.808: INFO: stderr: "" Apr 10 14:34:07.808: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 14:34:07.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:34:07.940: INFO: stderr: "" Apr 10 14:34:07.940: INFO: stdout: "update-demo-nautilus-fg9d2 update-demo-nautilus-ngnf6 " Apr 10 14:34:07.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg9d2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:08.034: INFO: stderr: "" Apr 10 14:34:08.034: INFO: stdout: "" Apr 10 14:34:08.034: INFO: update-demo-nautilus-fg9d2 is created but not running Apr 10 14:34:13.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2110' Apr 10 14:34:13.139: INFO: stderr: "" Apr 10 14:34:13.139: INFO: stdout: "update-demo-nautilus-fg9d2 update-demo-nautilus-ngnf6 " Apr 10 14:34:13.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg9d2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:13.226: INFO: stderr: "" Apr 10 14:34:13.226: INFO: stdout: "true" Apr 10 14:34:13.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg9d2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:13.336: INFO: stderr: "" Apr 10 14:34:13.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 14:34:13.336: INFO: validating pod update-demo-nautilus-fg9d2 Apr 10 14:34:13.340: INFO: got data: { "image": "nautilus.jpg" } Apr 10 14:34:13.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 14:34:13.340: INFO: update-demo-nautilus-fg9d2 is verified up and running Apr 10 14:34:13.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:13.433: INFO: stderr: "" Apr 10 14:34:13.433: INFO: stdout: "true" Apr 10 14:34:13.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ngnf6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2110' Apr 10 14:34:13.524: INFO: stderr: "" Apr 10 14:34:13.524: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 14:34:13.524: INFO: validating pod update-demo-nautilus-ngnf6 Apr 10 14:34:13.527: INFO: got data: { "image": "nautilus.jpg" } Apr 10 14:34:13.527: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 14:34:13.527: INFO: update-demo-nautilus-ngnf6 is verified up and running STEP: using delete to clean up resources Apr 10 14:34:13.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2110' Apr 10 14:34:13.619: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 14:34:13.619: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 10 14:34:13.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2110' Apr 10 14:34:13.720: INFO: stderr: "No resources found.\n" Apr 10 14:34:13.720: INFO: stdout: "" Apr 10 14:34:13.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2110 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 14:34:13.815: INFO: stderr: "" Apr 10 14:34:13.815: INFO: stdout: "update-demo-nautilus-fg9d2\nupdate-demo-nautilus-ngnf6\n" Apr 10 14:34:14.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2110' Apr 10 14:34:14.412: INFO: stderr: "No resources found.\n" Apr 10 14:34:14.413: INFO: stdout: "" Apr 10 14:34:14.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2110 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 14:34:14.504: INFO: stderr: "" Apr 10 14:34:14.504: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:34:14.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2110" for this suite. 
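The `--template` probes that recur throughout the kubectl steps above are ordinary Go text/template strings evaluated against the pod list JSON. The simplest of them, the pod-name listing template, can be exercised offline with the standard library; the mock pod list below is hypothetical, reduced to just the fields the template reads, with the names taken from this log:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// mockJSON stands in for the pod list a `kubectl get pods -o json` call would
// return. Only the fields the template touches are included (hypothetical data;
// the names match the pods seen in the log above).
const mockJSON = `{"items":[{"metadata":{"name":"update-demo-nautilus-ngnf6"}},{"metadata":{"name":"update-demo-nautilus-qwmnq"}}]}`

// renderNames executes the same template string the e2e test passes to kubectl
// via --template, against a decoded pod list.
func renderNames(doc string) string {
	var podList map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &podList); err != nil {
		panic(err)
	}
	tmpl := template.Must(template.New("names").Parse(`{{range.items}}{{.metadata.name}} {{end}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, podList); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	fmt.Println(renderNames(mockJSON)) // one space-separated name per pod
}
```

The container-state probes in the log additionally use `exists`, a helper kubectl registers on top of text/template, so they need kubectl's template engine rather than the plain standard library.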
Apr 10 14:34:20.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:34:20.664: INFO: namespace kubectl-2110 deletion completed in 6.155489944s • [SLOW TEST:31.733 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:34:20.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 10 14:34:20.739: INFO: Waiting up to 5m0s for pod "pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9" in namespace "emptydir-6970" to be "success or failure" Apr 10 14:34:20.741: INFO: Pod "pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605487ms Apr 10 14:34:22.746: INFO: Pod "pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006781127s Apr 10 14:34:24.750: INFO: Pod "pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011270404s STEP: Saw pod success Apr 10 14:34:24.750: INFO: Pod "pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9" satisfied condition "success or failure" Apr 10 14:34:24.753: INFO: Trying to get logs from node iruya-worker2 pod pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9 container test-container: STEP: delete the pod Apr 10 14:34:24.811: INFO: Waiting for pod pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9 to disappear Apr 10 14:34:24.818: INFO: Pod pod-5b55e7f4-f416-4b19-bb59-1ea776ba8ac9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:34:24.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6970" for this suite. Apr 10 14:34:30.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:34:30.930: INFO: namespace emptydir-6970 deletion completed in 6.108428033s • [SLOW TEST:10.265 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:34:30.931: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-918375cd-f981-476b-8c7b-b552d1b6e038 STEP: Creating a pod to test consume secrets Apr 10 14:34:30.991: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4" in namespace "projected-248" to be "success or failure" Apr 10 14:34:31.004: INFO: Pod "pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.540643ms Apr 10 14:34:33.008: INFO: Pod "pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016737506s Apr 10 14:34:35.012: INFO: Pod "pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020765176s STEP: Saw pod success Apr 10 14:34:35.012: INFO: Pod "pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4" satisfied condition "success or failure" Apr 10 14:34:35.015: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4 container secret-volume-test: STEP: delete the pod Apr 10 14:34:35.050: INFO: Waiting for pod pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4 to disappear Apr 10 14:34:35.064: INFO: Pod pod-projected-secrets-f2702d95-23f8-4173-892c-1329475c0dd4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:34:35.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-248" for this suite. 
Apr 10 14:34:41.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:34:41.162: INFO: namespace projected-248 deletion completed in 6.094823618s • [SLOW TEST:10.231 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:34:41.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2878.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2878.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search _http._tcp.dns-test-service.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2878.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2878.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 20.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.20_udp@PTR;check="$$(dig +tcp +noall +answer +search 20.141.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.141.20_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2878.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2878.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2878.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2878.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2878.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2878.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 20.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.20_udp@PTR;check="$$(dig +tcp +noall +answer +search 20.141.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.141.20_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 14:34:47.399: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.403: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.405: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.407: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.424: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.427: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.430: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod 
dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.432: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:47.451: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local] Apr 10 14:34:52.460: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:52.463: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:52.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b) Apr 10 14:34:52.469: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod 
dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:52.511: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:52.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:52.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:52.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:52.536: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local]
Apr 10 14:34:57.455: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.458: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.461: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.463: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.481: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.489: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:34:57.504: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local]
Apr 10 14:35:02.456: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.460: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.465: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.510: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.518: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:02.537: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local]
Apr 10 14:35:07.456: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.459: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.466: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.485: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.491: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:07.511: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local]
Apr 10 14:35:12.456: INFO: Unable to read wheezy_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.460: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.466: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.512: INFO: Unable to read jessie_udp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.516: INFO: Unable to read jessie_tcp@dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.525: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local from pod dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b: the server could not find the requested resource (get pods dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b)
Apr 10 14:35:12.544: INFO: Lookups using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b failed for: [wheezy_udp@dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@dns-test-service.dns-2878.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_udp@dns-test-service.dns-2878.svc.cluster.local jessie_tcp@dns-test-service.dns-2878.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2878.svc.cluster.local]
Apr 10 14:35:17.510: INFO: DNS probes using dns-2878/dns-test-d14bfb51-7cb6-4949-8f5d-6bda0317ca4b succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:35:17.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2878" for this suite.
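[Editor's note] The probe rounds above retry the same set of records on a roughly 5-second cadence until they succeed at 14:35:17.510. As a sketch (service and namespace names are taken from this log; running the lookups yourself requires DNS tooling such as `nslookup` inside a pod on the cluster, which is an assumption), the names under test reduce to:

```shell
# Build the FQDNs the DNS conformance test probes (names from this log run).
svc="dns-test-service"
ns="dns-2878"
fqdn="${svc}.${ns}.svc.cluster.local"            # plain service record
srv="_http._tcp.${svc}.${ns}.svc.cluster.local"  # SRV-style record for the http port
echo "$fqdn"
echo "$srv"
# Inside a pod on the cluster you could then check each record, e.g.:
#   nslookup "$fqdn"
#   nslookup -type=SRV "$srv"
```

The wheezy/jessie prefixes in the log are just the two client images the test runs the same lookups from; each failing round is recorded and retried until the service's DNS records propagate.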
Apr 10 14:35:23.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:35:24.042: INFO: namespace dns-2878 deletion completed in 6.084327318s
• [SLOW TEST:42.880 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:35:24.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-2dd6724d-9518-4ed5-8e53-f8e791c5f727
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:35:30.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-56" for this suite.
Apr 10 14:35:52.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:35:52.270: INFO: namespace configmap-56 deletion completed in 22.091845783s
• [SLOW TEST:28.227 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:35:52.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:35:57.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1369" for this suite.
Apr 10 14:36:03.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:36:04.015: INFO: namespace watch-1369 deletion completed in 6.187716315s
• [SLOW TEST:11.745 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:36:04.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 10 14:36:12.140: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 10 14:36:12.150: INFO: Pod pod-with-prestop-http-hook still exists
Apr 10 14:36:14.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 10 14:36:14.154: INFO: Pod pod-with-prestop-http-hook still exists
Apr 10 14:36:16.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 10 14:36:16.155: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:36:16.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6285" for this suite.
Apr 10 14:36:38.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:36:38.264: INFO: namespace container-lifecycle-hook-6285 deletion completed in 22.096530464s
• [SLOW TEST:34.249 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:36:38.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 10 14:36:38.316: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 10 14:36:43.321: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:36:44.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3518" for this suite.
Apr 10 14:36:50.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:36:50.469: INFO: namespace replication-controller-3518 deletion completed in 6.126527846s
• [SLOW TEST:12.205 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:36:50.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 10 14:36:50.572: INFO: Waiting up to 5m0s for pod "pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b" in namespace "emptydir-4422" to be "success or failure"
Apr 10 14:36:50.576: INFO: Pod "pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957478ms
Apr 10 14:36:52.580: INFO: Pod "pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007727946s
Apr 10 14:36:54.584: INFO: Pod "pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011992986s
STEP: Saw pod success
Apr 10 14:36:54.584: INFO: Pod "pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b" satisfied condition "success or failure"
Apr 10 14:36:54.592: INFO: Trying to get logs from node iruya-worker pod pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b container test-container:
STEP: delete the pod
Apr 10 14:36:54.608: INFO: Waiting for pod pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b to disappear
Apr 10 14:36:54.611: INFO: Pod pod-b2e61ddf-462e-4a2e-99bc-6c01e361585b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:36:54.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4422" for this suite.
Apr 10 14:37:00.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:37:00.733: INFO: namespace emptydir-4422 deletion completed in 6.119383285s
• [SLOW TEST:10.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:37:00.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-2svj
STEP: Creating a pod to test atomic-volume-subpath
Apr 10 14:37:00.793: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2svj" in namespace "subpath-3905" to be "success or failure"
Apr 10 14:37:00.812: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.478339ms
Apr 10 14:37:02.816: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02238611s
Apr 10 14:37:04.819: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 4.026174785s
Apr 10 14:37:06.830: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 6.036696475s
Apr 10 14:37:08.834: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 8.041051122s
Apr 10 14:37:10.838: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 10.044939593s
Apr 10 14:37:12.843: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 12.049241223s
Apr 10 14:37:14.847: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 14.053240693s
Apr 10 14:37:16.851: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 16.057538033s
Apr 10 14:37:18.855: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 18.061831559s
Apr 10 14:37:20.859: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 20.065682731s
Apr 10 14:37:22.863: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Running", Reason="", readiness=true. Elapsed: 22.069651107s
Apr 10 14:37:24.867: INFO: Pod "pod-subpath-test-secret-2svj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073970091s
STEP: Saw pod success
Apr 10 14:37:24.867: INFO: Pod "pod-subpath-test-secret-2svj" satisfied condition "success or failure"
Apr 10 14:37:24.871: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-2svj container test-container-subpath-secret-2svj:
STEP: delete the pod
Apr 10 14:37:24.894: INFO: Waiting for pod pod-subpath-test-secret-2svj to disappear
Apr 10 14:37:24.898: INFO: Pod pod-subpath-test-secret-2svj no longer exists
STEP: Deleting pod pod-subpath-test-secret-2svj
Apr 10 14:37:24.898: INFO: Deleting pod "pod-subpath-test-secret-2svj" in namespace "subpath-3905"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:37:24.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3905" for this suite.
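[Editor's note] The "success or failure" waits above poll the pod's phase on a 2-second cadence until it reaches Succeeded or the 5m0s timeout expires. A minimal sketch of that loop, assuming `kubectl` access to the test cluster (the namespace and pod names in the commented invocation are the ones from this log; `wait_for_phase` is a hypothetical helper, not part of the e2e framework):

```shell
# Poll until a command reports the target phase.
# $1 = command that prints the current phase, $2 = target phase, $3 = timeout in seconds.
wait_for_phase() {
  elapsed=0
  while [ "$elapsed" -le "$3" ]; do
    phase=$(eval "$1")            # eval so $1 can be a multi-word command
    if [ "$phase" = "$2" ]; then
      echo "reached $2 after ${elapsed}s"
      return 0
    fi
    sleep 2                        # the framework's 2s polling cadence seen above
    elapsed=$((elapsed + 2))
  done
  return 1
}

# Against a live cluster this would be invoked roughly as:
#   wait_for_phase 'kubectl -n subpath-3905 get pod pod-subpath-test-secret-2svj -o jsonpath={.status.phase}' Succeeded 300
```

The Pending → Running → Succeeded progression in the log is exactly what such a loop observes for the atomic-writer subpath pod, which runs for about 24 seconds before exiting successfully.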
Apr 10 14:37:30.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:37:31.021: INFO: namespace subpath-3905 deletion completed in 6.11772511s
• [SLOW TEST:30.287 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:37:31.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-525c11f7-bd05-44c6-bee4-710b66ad3225
STEP: Creating a pod to test consume secrets
Apr 10 14:37:31.092: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15" in namespace "projected-5752" to be "success or failure"
Apr 10 14:37:31.142: INFO: Pod "pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15": Phase="Pending", Reason="", readiness=false. Elapsed: 49.841304ms
Apr 10 14:37:33.147: INFO: Pod "pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05504152s
Apr 10 14:37:35.152: INFO: Pod "pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059422988s
STEP: Saw pod success
Apr 10 14:37:35.152: INFO: Pod "pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15" satisfied condition "success or failure"
Apr 10 14:37:35.155: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15 container projected-secret-volume-test:
STEP: delete the pod
Apr 10 14:37:35.198: INFO: Waiting for pod pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15 to disappear
Apr 10 14:37:35.211: INFO: Pod pod-projected-secrets-d98cb784-612f-4cb8-afd2-6e14b1e12e15 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:37:35.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5752" for this suite.
Apr 10 14:37:41.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:37:41.308: INFO: namespace projected-5752 deletion completed in 6.093230244s
• [SLOW TEST:10.287 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:37:41.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 10 14:37:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1528" for this suite.
Apr 10 14:38:23.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 10 14:38:23.528: INFO: namespace kubelet-test-1528 deletion completed in 38.102450003s
• [SLOW TEST:42.220 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 10 14:38:23.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5309
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 10 14:38:23.602: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 10 14:38:47.722: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.226 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5309 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 14:38:47.722: INFO: >>> kubeConfig: /root/.kube/config
I0410 14:38:47.760628 6 log.go:172] (0xc0016f0210) (0xc001d9c640) Create stream
I0410 14:38:47.760664 6 log.go:172] (0xc0016f0210) (0xc001d9c640) Stream added, broadcasting: 1
I0410 14:38:47.762755 6 log.go:172] (0xc0016f0210) Reply frame received for 1
I0410 14:38:47.762808 6 log.go:172] (0xc0016f0210) (0xc002b9d040) Create stream
I0410 14:38:47.762824 6 log.go:172] (0xc0016f0210) (0xc002b9d040) Stream added, broadcasting: 3
I0410 14:38:47.763819 6 log.go:172] (0xc0016f0210) Reply frame received for 3
I0410 14:38:47.763860 6 log.go:172] (0xc0016f0210) (0xc002b9d0e0) Create stream
I0410 14:38:47.763876 6 log.go:172] (0xc0016f0210) (0xc002b9d0e0) Stream added, broadcasting: 5
I0410 14:38:47.764729 6 log.go:172] (0xc0016f0210) Reply frame received for 5
I0410 14:38:48.829772 6 log.go:172] (0xc0016f0210) Data frame received for 3
I0410 14:38:48.829817 6 log.go:172] (0xc002b9d040) (3) Data frame handling
I0410 14:38:48.829840 6 log.go:172] (0xc002b9d040) (3) Data frame sent
I0410 14:38:48.829861 6 log.go:172] (0xc0016f0210) Data frame received for 3
I0410 14:38:48.829880 6 log.go:172] (0xc002b9d040) (3) Data frame handling
I0410 14:38:48.830107 6 log.go:172] (0xc0016f0210) Data frame received for 5
I0410 14:38:48.830136 6 log.go:172] (0xc002b9d0e0) (5) Data frame handling
I0410 14:38:48.832456 6 log.go:172] (0xc0016f0210) Data frame received for 1
I0410 14:38:48.832479 6 log.go:172] (0xc001d9c640) (1) Data frame handling
I0410 14:38:48.832501 6 log.go:172] (0xc001d9c640) (1) Data frame sent
I0410 14:38:48.832522 6 log.go:172] (0xc0016f0210) (0xc001d9c640) Stream removed, broadcasting: 1
I0410 14:38:48.832645 6 log.go:172] (0xc0016f0210) (0xc001d9c640) Stream removed, broadcasting: 1
I0410 14:38:48.832674 6 log.go:172] (0xc0016f0210) Go away received
I0410 14:38:48.832749 6 log.go:172] (0xc0016f0210) (0xc002b9d040) Stream removed, broadcasting: 3
I0410 14:38:48.832835 6 log.go:172] (0xc0016f0210) (0xc002b9d0e0) Stream removed, broadcasting: 5
Apr 10 14:38:48.832: INFO: Found all expected endpoints: [netserver-0]
Apr 10 14:38:48.836: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5309 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 14:38:48.836: INFO: >>> kubeConfig: /root/.kube/config
I0410 14:38:48.871089 6 log.go:172] (0xc000e9a9a0) (0xc002b9d360) Create stream
I0410 14:38:48.871137 6 log.go:172] (0xc000e9a9a0) (0xc002b9d360) Stream added, broadcasting: 1
I0410 14:38:48.874068 6 log.go:172] (0xc000e9a9a0) Reply frame received for 1
I0410 14:38:48.874132 6 log.go:172] (0xc000e9a9a0) (0xc001d9c6e0) Create stream
I0410 14:38:48.874165 6 log.go:172] (0xc000e9a9a0) (0xc001d9c6e0) Stream added, broadcasting: 3
I0410 14:38:48.875898 6 log.go:172] (0xc000e9a9a0) Reply frame received for 3
I0410 14:38:48.875937 6 log.go:172] (0xc000e9a9a0) (0xc001d9c960) Create stream
I0410 14:38:48.875955 6 log.go:172] (0xc000e9a9a0) (0xc001d9c960) Stream added, broadcasting: 5
I0410 14:38:48.876729 6 log.go:172] (0xc000e9a9a0) Reply frame received for 5
I0410 14:38:49.966840 6 log.go:172] (0xc000e9a9a0) Data frame received for 3
I0410 14:38:49.966898 6 log.go:172] (0xc001d9c6e0) (3) Data frame handling
I0410 14:38:49.966936 6 log.go:172] (0xc001d9c6e0) (3) Data frame sent
I0410 14:38:49.966959 6 log.go:172] (0xc000e9a9a0) Data frame received for 3
I0410 14:38:49.966979 6 log.go:172] (0xc001d9c6e0) (3) Data frame handling
I0410 14:38:49.967074 6 log.go:172] (0xc000e9a9a0) Data frame received for 5
I0410 14:38:49.967096 6 log.go:172] (0xc001d9c960) (5) Data frame handling
I0410 14:38:49.969459 6 log.go:172] (0xc000e9a9a0) Data frame received for 1
I0410 14:38:49.969504 6 log.go:172] (0xc002b9d360) (1)
Data frame handling I0410 14:38:49.969548 6 log.go:172] (0xc002b9d360) (1) Data frame sent I0410 14:38:49.969579 6 log.go:172] (0xc000e9a9a0) (0xc002b9d360) Stream removed, broadcasting: 1 I0410 14:38:49.969619 6 log.go:172] (0xc000e9a9a0) Go away received I0410 14:38:49.969739 6 log.go:172] (0xc000e9a9a0) (0xc002b9d360) Stream removed, broadcasting: 1 I0410 14:38:49.969769 6 log.go:172] (0xc000e9a9a0) (0xc001d9c6e0) Stream removed, broadcasting: 3 I0410 14:38:49.969792 6 log.go:172] (0xc000e9a9a0) (0xc001d9c960) Stream removed, broadcasting: 5 Apr 10 14:38:49.969: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:38:49.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5309" for this suite. Apr 10 14:39:12.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:39:12.079: INFO: namespace pod-network-test-5309 deletion completed in 22.104400942s • [SLOW TEST:48.550 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:39:12.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 10 14:39:12.148: INFO: Waiting up to 5m0s for pod "pod-d2d206d6-c8d6-42d2-aef1-553a16b50655" in namespace "emptydir-5309" to be "success or failure" Apr 10 14:39:12.161: INFO: Pod "pod-d2d206d6-c8d6-42d2-aef1-553a16b50655": Phase="Pending", Reason="", readiness=false. Elapsed: 13.689916ms Apr 10 14:39:14.165: INFO: Pod "pod-d2d206d6-c8d6-42d2-aef1-553a16b50655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017224953s Apr 10 14:39:16.169: INFO: Pod "pod-d2d206d6-c8d6-42d2-aef1-553a16b50655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021588279s STEP: Saw pod success Apr 10 14:39:16.169: INFO: Pod "pod-d2d206d6-c8d6-42d2-aef1-553a16b50655" satisfied condition "success or failure" Apr 10 14:39:16.172: INFO: Trying to get logs from node iruya-worker2 pod pod-d2d206d6-c8d6-42d2-aef1-553a16b50655 container test-container: STEP: delete the pod Apr 10 14:39:16.189: INFO: Waiting for pod pod-d2d206d6-c8d6-42d2-aef1-553a16b50655 to disappear Apr 10 14:39:16.193: INFO: Pod pod-d2d206d6-c8d6-42d2-aef1-553a16b50655 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:39:16.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5309" for this suite. 
Apr 10 14:39:22.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:39:22.296: INFO: namespace emptydir-5309 deletion completed in 6.10004153s • [SLOW TEST:10.217 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:39:22.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0410 14:39:23.407987 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 10 14:39:23.408: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:39:23.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3096" for this suite. 
Apr 10 14:39:29.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:39:29.601: INFO: namespace gc-3096 deletion completed in 6.189200781s • [SLOW TEST:7.304 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:39:29.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 14:39:29.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9794' Apr 10 14:39:32.789: INFO: stderr: "" Apr 10 14:39:32.789: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 10 14:39:32.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9794' Apr 10 14:39:33.074: INFO: stderr: "" Apr 10 14:39:33.074: INFO: stdout: 
"service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 10 14:39:34.079: INFO: Selector matched 1 pods for map[app:redis] Apr 10 14:39:34.079: INFO: Found 0 / 1 Apr 10 14:39:35.078: INFO: Selector matched 1 pods for map[app:redis] Apr 10 14:39:35.078: INFO: Found 0 / 1 Apr 10 14:39:36.078: INFO: Selector matched 1 pods for map[app:redis] Apr 10 14:39:36.078: INFO: Found 1 / 1 Apr 10 14:39:36.078: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 10 14:39:36.081: INFO: Selector matched 1 pods for map[app:redis] Apr 10 14:39:36.081: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 10 14:39:36.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-n45jn --namespace=kubectl-9794' Apr 10 14:39:36.198: INFO: stderr: "" Apr 10 14:39:36.198: INFO: stdout: "Name: redis-master-n45jn\nNamespace: kubectl-9794\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Fri, 10 Apr 2020 14:39:32 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.228\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://a9a0ea4e77f876521c9f81368425b7182de812efae3fe8ab33bfc4ae80eb0b43\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 10 Apr 2020 14:39:35 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qpf9z (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qpf9z:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qpf9z\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9794/redis-master-n45jn to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Apr 10 14:39:36.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9794' Apr 10 14:39:36.336: INFO: stderr: "" Apr 10 14:39:36.336: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9794\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-n45jn\n" Apr 10 14:39:36.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9794' Apr 10 14:39:36.441: INFO: stderr: "" Apr 10 14:39:36.441: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9794\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.84.254\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.228:6379\nSession Affinity: None\nEvents: \n" Apr 10 14:39:36.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 10 
14:39:36.566: INFO: stderr: "" Apr 10 14:39:36.566: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 10 Apr 2020 14:38:49 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 10 Apr 2020 14:38:49 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 10 Apr 2020 14:38:49 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 10 Apr 2020 14:38:49 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n 
Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 25d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 10 14:39:36.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9794' Apr 10 14:39:36.735: INFO: stderr: "" Apr 10 14:39:36.735: INFO: stdout: "Name: kubectl-9794\nLabels: e2e-framework=kubectl\n e2e-run=2a63e938-9bd4-4a0c-926d-2e1d446ffcd6\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:39:36.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9794" for this suite. 
Apr 10 14:39:58.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:39:58.863: INFO: namespace kubectl-9794 deletion completed in 22.125094734s • [SLOW TEST:29.262 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 10 14:39:58.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 10 14:39:58.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 10 14:39:59.103: INFO: stderr: "" Apr 10 14:39:59.103: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", 
GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 10 14:39:59.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9747" for this suite. Apr 10 14:40:05.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 10 14:40:05.201: INFO: namespace kubectl-9747 deletion completed in 6.094087983s • [SLOW TEST:6.337 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSApr 10 14:40:05.202: INFO: Running AfterSuite actions on all nodes Apr 10 14:40:05.202: INFO: Running AfterSuite actions on node 1 Apr 10 14:40:05.202: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6265.311 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS