I0515 12:55:54.380995 6 e2e.go:243] Starting e2e run "108568fc-bc5b-4a6d-b7a4-b391846335c1" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589547353 - Will randomize all specs
Will run 215 of 4412 specs

May 15 12:55:54.553: INFO: >>> kubeConfig: /root/.kube/config
May 15 12:55:54.557: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 15 12:55:54.577: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 15 12:55:54.610: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 15 12:55:54.610: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 15 12:55:54.610: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 15 12:55:54.619: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 15 12:55:54.620: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 15 12:55:54.620: INFO: e2e test version: v1.15.11
May 15 12:55:54.620: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:55:54.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
May 15 12:55:54.772: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-7257
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7257 to expose endpoints map[]
May 15 12:55:54.822: INFO: Get endpoints failed (23.734078ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 15 12:55:55.825: INFO: successfully validated that service endpoint-test2 in namespace services-7257 exposes endpoints map[] (1.026921957s elapsed)
STEP: Creating pod pod1 in namespace services-7257
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7257 to expose endpoints map[pod1:[80]]
May 15 12:55:59.897: INFO: successfully validated that service endpoint-test2 in namespace services-7257 exposes endpoints map[pod1:[80]] (4.066128467s elapsed)
STEP: Creating pod pod2 in namespace services-7257
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7257 to expose endpoints map[pod1:[80] pod2:[80]]
May 15 12:56:04.062: INFO: successfully validated that service endpoint-test2 in namespace services-7257 exposes endpoints map[pod1:[80] pod2:[80]] (4.161431958s elapsed)
STEP: Deleting pod pod1 in namespace services-7257
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7257 to expose endpoints map[pod2:[80]]
May 15 12:56:05.135: INFO: successfully validated that service endpoint-test2 in namespace services-7257 exposes endpoints map[pod2:[80]] (1.069014251s elapsed)
STEP: Deleting pod pod2 in namespace services-7257
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7257 to expose endpoints map[]
May 15 12:56:06.163: INFO: successfully validated that service endpoint-test2 in namespace services-7257 exposes endpoints map[] (1.024132215s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:56:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7257" for this suite.
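For context on what the endpoints map above is tracking: the spec creates a selector-based Service, then adds and deletes backing pods, and the endpoints controller publishes each running pod's port into the Endpoints object. A minimal Go sketch of the two objects involved (package name, image, and labels are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // endpointService selects pods labeled name=endpoint-test2; while such a
    // pod is running, its IP:80 appears in the Endpoints object, which is the
    // map[pod1:[80] pod2:[80]] the log lines above are validating.
    func endpointService() *v1.Service {
        return &v1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: v1.ServiceSpec{
                Selector: map[string]string{"name": "endpoint-test2"},
                Ports:    []v1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
            },
        }
    }

    // backingPod stands in for pod1/pod2: deleting one removes its entry from
    // the endpoints map, as the tail of the spec shows.
    func backingPod(name string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   name,
                Labels: map[string]string{"name": "endpoint-test2"},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "serve",
                    Image: "k8s.gcr.io/pause:3.1", // illustrative image
                    Ports: []v1.ContainerPort{{ContainerPort: 80}},
                }},
            },
        }
    }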
May 15 12:56:28.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:56:28.372: INFO: namespace services-7257 deletion completed in 22.141827811s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:33.752 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:56:28.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 15 12:56:28.424: INFO: Waiting up to 5m0s for pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d" in namespace "emptydir-3126" to be "success or failure"
May 15 12:56:28.440: INFO: Pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.369122ms
May 15 12:56:30.444: INFO: Pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019903509s
May 15 12:56:32.448: INFO: Pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d": Phase="Running", Reason="", readiness=true. Elapsed: 4.023695733s
May 15 12:56:34.452: INFO: Pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027316981s
STEP: Saw pod success
May 15 12:56:34.452: INFO: Pod "pod-795dd2a7-529c-4638-a67f-07262a39c54d" satisfied condition "success or failure"
May 15 12:56:34.454: INFO: Trying to get logs from node iruya-worker2 pod pod-795dd2a7-529c-4638-a67f-07262a39c54d container test-container:
STEP: delete the pod
May 15 12:56:34.470: INFO: Waiting for pod pod-795dd2a7-529c-4638-a67f-07262a39c54d to disappear
May 15 12:56:34.475: INFO: Pod pod-795dd2a7-529c-4638-a67f-07262a39c54d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:56:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3126" for this suite.
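The "success or failure" polling above is the framework's standard pattern for volume specs: a one-shot pod writes a 0644 file into an emptyDir mount as a non-root user, prints the mode and contents for verification, and exits 0. A rough sketch of such a pod, assuming k8s.io/api/core/v1 (UID, image, and paths are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirProbe covers the (non-root,0644,default) case: run as UID 1001,
    // write a file with mode 0644 into a default-medium (node disk) emptyDir,
    // and exit so the pod reaches Succeeded.
    func emptyDirProbe() *v1.Pod {
        uid := int64(1001)
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-probe"},
            Spec: v1.PodSpec{
                RestartPolicy:   v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    // Default medium means node storage; Medium: "Memory" would be tmpfs.
                    VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
                }},
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "busybox:1.31",
                    Command: []string{"sh", "-c",
                        "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"},
                    VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }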
May 15 12:56:40.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:56:40.577: INFO: namespace emptydir-3126 deletion completed in 6.100123671s

• [SLOW TEST:12.204 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:56:40.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 12:56:40.664: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 6.55067ms)
May 15 12:56:40.668: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.143739ms)
May 15 12:56:40.670: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.620916ms)
May 15 12:56:40.673: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.524764ms)
May 15 12:56:40.676: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.058355ms)
May 15 12:56:40.679: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.035535ms)
May 15 12:56:40.692: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 13.153832ms)
May 15 12:56:40.696: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.049699ms)
May 15 12:56:40.700: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.274978ms)
May 15 12:56:40.703: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.815981ms)
May 15 12:56:40.707: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.912257ms)
May 15 12:56:40.711: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.693983ms)
May 15 12:56:40.715: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.780933ms)
May 15 12:56:40.718: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.35305ms)
May 15 12:56:40.722: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.441916ms)
May 15 12:56:40.725: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.530837ms)
May 15 12:56:40.729: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.328568ms)
May 15 12:56:40.732: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.971678ms)
May 15 12:56:40.735: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.989839ms)
May 15 12:56:40.738: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.954162ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:56:40.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8649" for this suite.
May 15 12:56:46.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:56:46.843: INFO: namespace proxy-8649 deletion completed in 6.101747567s

• [SLOW TEST:6.265 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:56:46.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 15 12:56:51.456: INFO: Successfully updated pod "labelsupdateb1d4bc41-5c01-4eba-9476-7b7d674932b7"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:56:55.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5621" for this suite.
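The projected downwardAPI spec above mounts the pod's own labels as a file, mutates the labels, and waits for the kubelet to rewrite the file. A sketch of the volume wiring, assuming k8s.io/api/core/v1 (names and image are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // labelsVolumePod projects metadata.labels to /etc/podinfo/labels. After
    // the test updates the pod's labels ("Successfully updated pod" above),
    // the kubelet refreshes the file without restarting the container.
    func labelsVolumePod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate-demo",
                Labels: map[string]string{"stage": "before"}, // mutated later by the test
            },
            Spec: v1.PodSpec{
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                DownwardAPI: &v1.DownwardAPIProjection{
                                    Items: []v1.DownwardAPIVolumeFile{{
                                        Path:     "labels",
                                        FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:         "client",
                    Image:        "busybox:1.31",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
    }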
May 15 12:57:17.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:57:17.596: INFO: namespace projected-5621 deletion completed in 22.077230165s

• [SLOW TEST:30.753 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:57:17.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-986
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-986
STEP: Deleting pre-stop pod
May 15 12:57:30.729: INFO: Saw:
{
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:57:30.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-986" for this suite.
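The JSON above is the server pod's accounting: "Received": {"prestop": 1} means the tester pod's preStop hook fired exactly once before the pod was killed. The shape of such a hook, sketched against the v1.15-era API (v1.Handler was renamed LifecycleHandler in later releases; host, port, image, and names here are illustrative):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // testerWithPreStop GETs the server's /prestop endpoint during graceful
    // termination; the kubelet runs the hook before sending SIGTERM, which is
    // why the server's counter reads 1 after "Deleting pre-stop pod".
    func testerWithPreStop(serverIP string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "tester"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:    "tester",
                    Image:   "busybox:1.31",
                    Command: []string{"sleep", "600"},
                    Lifecycle: &v1.Lifecycle{
                        PreStop: &v1.Handler{
                            HTTPGet: &v1.HTTPGetAction{
                                Path: "/prestop",
                                Port: intstr.FromInt(8080),
                                Host: serverIP,
                            },
                        },
                    },
                }},
            },
        }
    }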
May 15 12:58:12.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:58:12.830: INFO: namespace prestop-986 deletion completed in 42.088839475s

• [SLOW TEST:55.234 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:58:12.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bdbd21ea-d113-4e15-886c-3c6456d6bcef
STEP: Creating a pod to test consume secrets
May 15 12:58:13.164: INFO: Waiting up to 5m0s for pod "pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa" in namespace "secrets-2182" to be "success or failure"
May 15 12:58:13.167: INFO: Pod "pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.282941ms
May 15 12:58:15.171: INFO: Pod "pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450354s
May 15 12:58:17.176: INFO: Pod "pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012263816s
STEP: Saw pod success
May 15 12:58:17.176: INFO: Pod "pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa" satisfied condition "success or failure"
May 15 12:58:17.180: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa container secret-volume-test:
STEP: delete the pod
May 15 12:58:17.253: INFO: Waiting for pod pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa to disappear
May 15 12:58:17.293: INFO: Pod pod-secrets-a6b60668-a0f4-4614-92ca-b854c16c24aa no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:58:17.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2182" for this suite.
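The point of the spec above is that one Secret can back several volumes in the same pod. A sketch of the double mount (secret name, mount paths, and image are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // twoMountSecretPod mounts the same Secret at two paths and dumps both,
    // which is all the "consumable in multiple volumes" check needs.
    func twoMountSecretPod() *v1.Pod {
        secretSource := v1.VolumeSource{
            Secret: &v1.SecretVolumeSource{SecretName: "test-secret"},
        }
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{
                    {Name: "secret-volume-1", VolumeSource: secretSource},
                    {Name: "secret-volume-2", VolumeSource: secretSource},
                },
                Containers: []v1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox:1.31",
                    Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
                    VolumeMounts: []v1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
                    },
                }},
            },
        }
    }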
May 15 12:58:23.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:58:23.376: INFO: namespace secrets-2182 deletion completed in 6.078862789s

• [SLOW TEST:10.545 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:58:23.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 12:58:23.485: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 15 12:58:25.580: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:58:26.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7894" for this suite.
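The quota interplay above: a ResourceQuota capping the namespace at two pods makes the third replica's creation fail, and the RC controller records that as a failure condition on the RC's status until it is scaled down. A sketch of the two objects (the "condition-test" name comes from the log; everything else is illustrative):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podQuota allows only two pods in the namespace.
    func podQuota() *v1.ResourceQuota {
        return &v1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: v1.ResourceQuotaSpec{
                Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
            },
        }
    }

    // overQuotaRC asks for three replicas; the third pod is rejected by the
    // quota check, which surfaces as a failure condition on the RC status
    // until Replicas is lowered to fit the quota.
    func overQuotaRC() *v1.ReplicationController {
        replicas := int32(3)
        labels := map[string]string{"name": "condition-test"}
        return &v1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: v1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
                    },
                },
            },
        }
    }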
May 15 12:58:32.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:58:32.828: INFO: namespace replication-controller-7894 deletion completed in 6.225700749s

• [SLOW TEST:9.452 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:58:32.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6669/configmap-test-8a7d0969-652b-4cf0-a0f4-7090a141808a
STEP: Creating a pod to test consume configMaps
May 15 12:58:33.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061" in namespace "configmap-6669" to be "success or failure"
May 15 12:58:33.361: INFO: Pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061": Phase="Pending", Reason="", readiness=false. Elapsed: 238.509376ms
May 15 12:58:35.364: INFO: Pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241271162s
May 15 12:58:37.368: INFO: Pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061": Phase="Running", Reason="", readiness=true. Elapsed: 4.245584033s
May 15 12:58:39.372: INFO: Pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249748908s
STEP: Saw pod success
May 15 12:58:39.372: INFO: Pod "pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061" satisfied condition "success or failure"
May 15 12:58:39.375: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061 container env-test:
STEP: delete the pod
May 15 12:58:39.418: INFO: Waiting for pod pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061 to disappear
May 15 12:58:39.449: INFO: Pod pod-configmaps-c71cac50-0326-43d3-961b-882a802d7061 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:58:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6669" for this suite.
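The env-test container above gets its ConfigMap data injected as an environment variable rather than as a file. The wiring, sketched (key, variable, and object names are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapEnvPod maps one ConfigMap key into CONFIG_DATA_1 and prints the
    // environment; a test in this style then greps the pod log for the value.
    func configMapEnvPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "env-test",
                    Image:   "busybox:1.31",
                    Command: []string{"env"},
                    Env: []v1.EnvVar{{
                        Name: "CONFIG_DATA_1",
                        ValueFrom: &v1.EnvVarSource{
                            ConfigMapKeyRef: &v1.ConfigMapKeySelector{
                                LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
    }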
May 15 12:58:45.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:58:45.564: INFO: namespace configmap-6669 deletion completed in 6.112179258s

• [SLOW TEST:12.735 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:58:45.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fb227b0c-c8a3-4b05-bf5f-26911ed9061e
STEP: Creating a pod to test consume secrets
May 15 12:58:45.643: INFO: Waiting up to 5m0s for pod "pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779" in namespace "secrets-2887" to be "success or failure"
May 15 12:58:45.647: INFO: Pod "pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266443ms
May 15 12:58:47.779: INFO: Pod "pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135109482s
May 15 12:58:49.782: INFO: Pod "pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138899727s
STEP: Saw pod success
May 15 12:58:49.783: INFO: Pod "pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779" satisfied condition "success or failure"
May 15 12:58:49.785: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779 container secret-volume-test:
STEP: delete the pod
May 15 12:58:49.818: INFO: Waiting for pod pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779 to disappear
May 15 12:58:49.825: INFO: Pod pod-secrets-3ffe03d3-a705-430f-b133-956a69d6a779 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:58:49.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2887" for this suite.
May 15 12:58:55.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:58:55.910: INFO: namespace secrets-2887 deletion completed in 6.083787591s

• [SLOW TEST:10.346 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:58:55.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6ddb1115-8ed1-4e4a-a8ac-e512309f4cf8
STEP: Creating a pod to test consume configMaps
May 15 12:58:55.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394" in namespace "configmap-4000" to be "success or failure"
May 15 12:58:56.006: INFO: Pod "pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394": Phase="Pending", Reason="", readiness=false. Elapsed: 23.172771ms
May 15 12:58:58.010: INFO: Pod "pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027658299s
May 15 12:59:00.017: INFO: Pod "pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034993058s
STEP: Saw pod success
May 15 12:59:00.017: INFO: Pod "pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394" satisfied condition "success or failure"
May 15 12:59:00.097: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394 container configmap-volume-test:
STEP: delete the pod
May 15 12:59:00.121: INFO: Waiting for pod pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394 to disappear
May 15 12:59:00.124: INFO: Pod pod-configmaps-7ba093c9-c806-4984-a9d8-06e74a69a394 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:59:00.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4000" for this suite.
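Relative to the plain-volume case, the spec above also pins the file mode: defaultMode applies to every key projected from the ConfigMap. A sketch of the mount (the specific mode value, names, and image are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapModePod mounts a ConfigMap with files forced to mode 0400; the
    // probe container lists the mount so the mode can be asserted from logs.
    func configMapModePod() *v1.Pod {
        mode := int32(0400)
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mode-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: v1.VolumeSource{
                        ConfigMap: &v1.ConfigMapVolumeSource{
                            LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume"},
                            DefaultMode:          &mode,
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "busybox:1.31",
                    Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
                    VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
            },
        }
    }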
May 15 12:59:06.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:59:06.226: INFO: namespace configmap-4000 deletion completed in 6.099085072s

• [SLOW TEST:10.315 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:59:06.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:59:06.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4119" for this suite.
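"Verifying QOS class is set on the pod" above refers to status.qosClass, which the control plane derives from the containers' requests and limits. A sketch of the Guaranteed case (values, names, and image are illustrative):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // guaranteedPod: requests == limits for every container yields
    // status.qosClass = Guaranteed; requests below limits would be Burstable,
    // and no requests or limits at all would be BestEffort.
    func guaranteedPod() *v1.Pod {
        res := v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        }
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:      "main",
                    Image:     "k8s.gcr.io/pause:3.1", // illustrative image
                    Resources: v1.ResourceRequirements{Requests: res, Limits: res},
                }},
            },
        }
    }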
May 15 12:59:28.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:59:28.489: INFO: namespace pods-4119 deletion completed in 22.106324728s

• [SLOW TEST:22.263 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:59:28.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3eb1624d-b2ad-4758-a690-e110e603fcf2
STEP: Creating a pod to test consume secrets
May 15 12:59:28.581: INFO: Waiting up to 5m0s for pod "pod-secrets-041f706d-9023-44f8-a45f-686814738ed3" in namespace "secrets-2191" to be "success or failure"
May 15 12:59:28.588: INFO: Pod "pod-secrets-041f706d-9023-44f8-a45f-686814738ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.716351ms
May 15 12:59:30.756: INFO: Pod "pod-secrets-041f706d-9023-44f8-a45f-686814738ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174927515s
May 15 12:59:32.760: INFO: Pod "pod-secrets-041f706d-9023-44f8-a45f-686814738ed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178413861s
STEP: Saw pod success
May 15 12:59:32.760: INFO: Pod "pod-secrets-041f706d-9023-44f8-a45f-686814738ed3" satisfied condition "success or failure"
May 15 12:59:32.809: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-041f706d-9023-44f8-a45f-686814738ed3 container secret-volume-test:
STEP: delete the pod
May 15 12:59:32.830: INFO: Waiting for pod pod-secrets-041f706d-9023-44f8-a45f-686814738ed3 to disappear
May 15 12:59:32.834: INFO: Pod pod-secrets-041f706d-9023-44f8-a45f-686814738ed3 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:59:32.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2191" for this suite.
May 15 12:59:38.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 12:59:38.907: INFO: namespace secrets-2191 deletion completed in 6.070143105s

• [SLOW TEST:10.417 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 12:59:38.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 12:59:44.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7261" for this suite.
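The adoption flow above: a bare pod carrying a matching label exists first, and when the RC is created its controller claims the orphan by writing itself in as the pod's controller ownerReference instead of starting a new pod. Sketched (label, names, and image are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // orphanPod has no ownerReferences; it merely carries the label the RC
    // selector will match.
    func orphanPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "pod-adoption",
                Labels: map[string]string{"name": "pod-adoption"},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
    }

    // adoptingRC wants one replica; since orphanPod already satisfies the
    // selector, the RC controller adopts it rather than creating a second pod.
    func adoptingRC() *v1.ReplicationController {
        replicas := int32(1)
        labels := map[string]string{"name": "pod-adoption"}
        return &v1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
            Spec: v1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
                    },
                },
            },
        }
    }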
May 15 13:00:06.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:00:06.150: INFO: namespace replication-controller-7261 deletion completed in 22.090256926s

• [SLOW TEST:27.243 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:00:06.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 15 13:00:06.249: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:00:11.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2351" for this suite.
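The init-container spec above relies on the ordering rule: all init containers must succeed before app containers start, and with restartPolicy Never a failing init container moves the pod straight to Failed. Sketched (names and images are illustrative, not taken from the test source):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // failingInitPod: init1 exits non-zero; because RestartPolicy is Never the
    // kubelet does not retry it, the app container never starts, and the pod
    // phase becomes Failed - which is what the spec asserts.
    func failingInitPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                InitContainers: []v1.Container{{
                    Name:    "init1",
                    Image:   "busybox:1.31",
                    Command: []string{"/bin/false"},
                }},
                Containers: []v1.Container{{
                    Name:  "run1",
                    Image: "k8s.gcr.io/pause:3.1",
                }},
            },
        }
    }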
May 15 13:00:17.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:00:17.712: INFO: namespace init-container-2351 deletion completed in 6.098619544s

• [SLOW TEST:11.561 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:00:17.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3111
I0515 13:00:17.794245 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3111, replica count: 1
I0515 13:00:18.844657 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 13:00:19.844878 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 13:00:20.845088 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 13:00:21.845521 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 15 13:00:21.988: INFO: Created: latency-svc-wldfk
May 15 13:00:22.000: INFO: Got endpoints: latency-svc-wldfk [55.172065ms]
May 15 13:00:22.036: INFO: Created: latency-svc-pcwcs
May 15 13:00:22.121: INFO: Got endpoints: latency-svc-pcwcs [120.581614ms]
May 15 13:00:22.124: INFO: Created: latency-svc-9qxbw
May 15 13:00:22.186: INFO: Got endpoints: latency-svc-9qxbw [185.184502ms]
May 15 13:00:22.279: INFO: Created: latency-svc-pd6cs
May 15 13:00:22.306: INFO: Got endpoints: latency-svc-pd6cs [305.748914ms]
May 15 13:00:22.307: INFO: Created: latency-svc-c66n6
May 15 13:00:22.321: INFO: Got endpoints: latency-svc-c66n6 [320.287474ms]
May 15 13:00:22.342: INFO: Created: latency-svc-g5gjc
May 15 13:00:22.351: INFO: Got endpoints: latency-svc-g5gjc [350.208684ms]
May 15 13:00:22.409: INFO: Created: latency-svc-g48tz
May 15 13:00:22.412: INFO: Got endpoints: latency-svc-g48tz [411.554929ms]
May 15 13:00:22.486: INFO: Created: latency-svc-s5mhj
May 15 13:00:22.499: INFO: Got endpoints: latency-svc-s5mhj [498.590676ms]
May 15 13:00:22.559: INFO: Created: latency-svc-cr7hg
May 15 13:00:22.566: INFO: Got endpoints: latency-svc-cr7hg [565.638055ms]
May 15 13:00:22.600: INFO: Created: latency-svc-nq9xf
May 15 13:00:22.633: INFO: Got endpoints: latency-svc-nq9xf [632.28174ms]
May 15 13:00:22.702: INFO: Created: latency-svc-snmlg
May 15 13:00:22.709: INFO: Got endpoints: latency-svc-snmlg [708.367996ms]
May 15 13:00:22.773: INFO: Created: latency-svc-s955b
May 15 13:00:22.846: INFO: Got endpoints: latency-svc-s955b [845.013717ms]
May 15 13:00:22.864: INFO: Created: latency-svc-2xnbg
May 15 13:00:22.878: INFO: Got endpoints: latency-svc-2xnbg [877.22834ms]
May 15 13:00:22.901: INFO: Created: latency-svc-4kwc9
May 15 13:00:22.915: INFO: Got endpoints: latency-svc-4kwc9 [913.970827ms]
May 15 13:00:22.942: INFO: Created: latency-svc-9stxs
May 15 13:00:23.055: INFO: Got endpoints: latency-svc-9stxs [1.054679826s]
May 15 13:00:23.061: INFO: Created: latency-svc-2vbkg
May 15 13:00:23.067: INFO: Got endpoints: latency-svc-2vbkg [1.065924985s]
May 15 13:00:23.098: INFO: Created: latency-svc-lhmhm
May 15 13:00:23.116: INFO: Got endpoints: latency-svc-lhmhm [994.549956ms]
May 15 13:00:23.134: INFO: Created: latency-svc-fqvvn
May 15 13:00:23.152: INFO: Got endpoints: latency-svc-fqvvn [965.451859ms]
May 15 13:00:23.219: INFO: Created: latency-svc-szftm
May 15 13:00:23.242: INFO: Got endpoints: latency-svc-szftm [935.311496ms]
May 15 13:00:23.271: INFO: Created: latency-svc-sqn4m
May 15 13:00:23.290: INFO: Got endpoints: latency-svc-sqn4m [968.647776ms]
May 15 13:00:23.314: INFO: Created: latency-svc-fdkgx
May 15 13:00:23.373: INFO: Got endpoints: latency-svc-fdkgx [1.021939108s]
May 15 13:00:23.376: INFO: Created: latency-svc-snwzm
May 15 13:00:23.380: INFO: Got endpoints: latency-svc-snwzm [967.280748ms]
May 15 13:00:23.404: INFO: Created: latency-svc-4n75p
May 15 13:00:23.421: INFO: Got endpoints: latency-svc-4n75p [921.05523ms]
May 15 13:00:23.459: INFO: Created: latency-svc-ks9qb
May 15 13:00:23.468: INFO: Got endpoints: latency-svc-ks9qb [901.917ms]
May 15 13:00:23.536: INFO: Created: latency-svc-6q6vr
May 15 13:00:23.553: INFO: Got endpoints: latency-svc-6q6vr [920.162367ms]
May 15 13:00:23.584: INFO: Created: latency-svc-h4mpq
May 15 13:00:23.601: INFO: Got endpoints: latency-svc-h4mpq [892.173165ms]
May 15 13:00:23.668: INFO: Created: latency-svc-ckzwm
May 15 13:00:23.670: INFO: Got endpoints: latency-svc-ckzwm [823.875908ms]
May 15 13:00:23.697: INFO: Created: latency-svc-xx8fq
May 15 13:00:23.710: INFO: Got endpoints: latency-svc-xx8fq [831.699812ms]
May 15 13:00:23.728: INFO: Created: latency-svc-wh6wh
May 15 13:00:23.740: INFO: Got endpoints: latency-svc-wh6wh [825.625746ms]
May 15 13:00:23.764: INFO: Created: latency-svc-b5bc6
May 15 13:00:23.804: INFO: Got endpoints: latency-svc-b5bc6 [748.506935ms]
May 15 13:00:23.818: INFO: Created: latency-svc-zmd5s
May 15 13:00:23.832: INFO: Got endpoints: latency-svc-zmd5s [765.021771ms]
May 15 13:00:23.860: INFO: Created: latency-svc-lpn8x
May 15 13:00:23.873: INFO: Got endpoints: latency-svc-lpn8x [757.319588ms]
May 15 13:00:23.895: INFO: Created: latency-svc-tgpx9
May 15 13:00:23.935: INFO: Got endpoints: latency-svc-tgpx9 [783.76482ms]
May 15 13:00:23.944: INFO: Created: latency-svc-prdl5
May 15 13:00:23.958: INFO: Got endpoints: latency-svc-prdl5 [716.544658ms]
May 15 13:00:24.004: INFO: Created: latency-svc-qkl47
May 15 13:00:24.012: INFO: Got endpoints: latency-svc-qkl47 [722.533106ms]
May 15 13:00:24.034: INFO: Created: latency-svc-2psgd
May 15 13:00:24.074: INFO: Got endpoints: latency-svc-2psgd [700.630295ms]
May 15 13:00:24.106: INFO: Created: latency-svc-w4hpd
May 15 13:00:24.128: INFO: Got endpoints: latency-svc-w4hpd [748.015297ms]
May 15 13:00:24.272: INFO: Created: latency-svc-68g6m
May 15 13:00:24.279: INFO: Got endpoints: latency-svc-68g6m [858.074573ms]
May 15 13:00:24.304: INFO: Created: latency-svc-6kj7s
May 15 13:00:24.320: INFO: Got endpoints: latency-svc-6kj7s [851.300458ms]
May 15 13:00:24.346: INFO: Created: latency-svc-mgthw
May 15 13:00:24.356: INFO: Got endpoints: latency-svc-mgthw [803.058611ms]
May 15 13:00:24.421: INFO: Created: latency-svc-pj86b
May 15 13:00:24.428: INFO: Got endpoints: latency-svc-pj86b [826.852415ms]
May 15 13:00:24.457: INFO: Created: latency-svc-9gfqg
May 15 13:00:24.471: INFO: Got endpoints: latency-svc-9gfqg [801.119351ms]
May 15 13:00:24.490: INFO: Created: latency-svc-jg2qj
May 15 13:00:24.507: INFO: Got endpoints: latency-svc-jg2qj [797.178486ms]
May 15 13:00:24.559: INFO: Created: latency-svc-9vhqw
May 15 13:00:24.562: INFO: Got endpoints: latency-svc-9vhqw [821.948982ms]
May 15 13:00:24.591: INFO: Created: latency-svc-g87cz
May 15 13:00:24.610: INFO: Got endpoints: latency-svc-g87cz [805.936074ms]
May 15 13:00:24.634: INFO: Created: latency-svc-4b7xc
May 15 13:00:24.646: INFO: Got endpoints: latency-svc-4b7xc [814.030856ms]
May 15 13:00:24.712: INFO: Created: latency-svc-2dvmf
May 15 13:00:24.736: INFO: Got endpoints: latency-svc-2dvmf [863.363062ms]
May 15 13:00:24.780: INFO: Created: latency-svc-ljf9b
May 15 13:00:24.855: INFO: Got endpoints: latency-svc-ljf9b [919.548995ms]
May 15 13:00:24.996: INFO: Created: latency-svc-kt74k
May 15 13:00:25.013: INFO: Got endpoints: latency-svc-kt74k [1.054756411s]
May 15 13:00:25.176: INFO: Created: latency-svc-z9jxw
May 15 13:00:25.218: INFO: Got endpoints: latency-svc-z9jxw [1.205821054s]
May 15 13:00:25.219: INFO: Created: latency-svc-xw24g
May 15 13:00:25.235: INFO: Got endpoints: latency-svc-xw24g [1.161392408s]
May 15 13:00:25.367: INFO: Created: latency-svc-7snlq
May 15 13:00:25.371: INFO: Got endpoints: latency-svc-7snlq [1.243154871s]
May 15 13:00:25.462: INFO: Created: latency-svc-9f6rp
May 15 13:00:25.504: INFO: Got endpoints: latency-svc-9f6rp [1.225228144s]
May 15 13:00:25.528: INFO: Created: latency-svc-rf6mg
May 15 13:00:25.546: INFO: Got endpoints: latency-svc-rf6mg [1.226003654s]
May 15 13:00:25.570: INFO: Created: latency-svc-slsts
May 15 13:00:25.582: INFO: Got endpoints: latency-svc-slsts [1.225654598s]
May 15 13:00:25.600: INFO: Created: latency-svc-4d8qk
May 15 13:00:25.654: INFO: Got endpoints: latency-svc-4d8qk [1.225803951s]
May 15 13:00:25.658: INFO: Created: latency-svc-x94gz
May 15 13:00:25.666: INFO: Got endpoints: latency-svc-x94gz [1.195172867s]
May 15 13:00:25.690: INFO: Created: latency-svc-2l4cl
May 15 13:00:25.703: INFO: Got endpoints: latency-svc-2l4cl [1.195504076s]
May 15 13:00:25.732: INFO: Created: latency-svc-nmrr8
May 15 13:00:25.747: INFO: Got endpoints: latency-svc-nmrr8 [1.184676781s]
May 15 13:00:25.794: INFO: Created: latency-svc-t22xr
May 15 13:00:25.796: INFO: Got endpoints: latency-svc-t22xr [1.186026044s]
May 15 13:00:25.852: INFO: Created: latency-svc-6qn82
May 15 13:00:25.866: INFO: Got endpoints: latency-svc-6qn82 [1.22006267s]
May 15 13:00:25.888: INFO: Created: latency-svc-dxbbf
May 15 13:00:25.960: INFO: Got endpoints: latency-svc-dxbbf [1.223214424s]
May 15 13:00:25.963: INFO: Created: latency-svc-s9bnt
May 15 13:00:25.968: INFO: Got endpoints: latency-svc-s9bnt [1.113074517s]
May 15 13:00:26.026: INFO: Created: latency-svc-wgspp
May 15 13:00:26.040: INFO: Got endpoints: latency-svc-wgspp [1.02701607s]
May 15 13:00:26.091: INFO: Created: latency-svc-95rwd
May 15 13:00:26.094: INFO: Got endpoints: latency-svc-95rwd [875.942988ms]
May 15 13:00:26.128: INFO: Created: latency-svc-b76vm
May 15 13:00:26.143: INFO: Got endpoints: latency-svc-b76vm [907.529375ms]
May 15 13:00:26.164: INFO: Created: latency-svc-kvq5p
May 15 13:00:26.191: INFO: Got endpoints: latency-svc-kvq5p [820.099515ms]
May 15 13:00:26.248: INFO: Created: latency-svc-7zp6x
May 15 13:00:26.276: INFO: Got endpoints: latency-svc-7zp6x [771.615581ms]
May 15 13:00:26.302: INFO: Created: latency-svc-r9n7n
May 15 13:00:26.311: INFO: Got endpoints: latency-svc-r9n7n [765.321346ms]
May 15 13:00:26.397: INFO: Created: latency-svc-fhwjh
May 15 13:00:26.401: INFO: Got endpoints: latency-svc-fhwjh [818.480803ms]
May 15 13:00:26.445: INFO: Created: latency-svc-96pxl
May 15 13:00:26.462: INFO: Got endpoints: latency-svc-96pxl [807.75762ms]
May 15 13:00:26.481: INFO: Created: latency-svc-9lzqp
May 15 13:00:26.528: INFO: Got endpoints: latency-svc-9lzqp [862.051971ms]
May 15 13:00:26.536: INFO: Created: latency-svc-s65fg
May 15 13:00:26.553: INFO: Got endpoints: latency-svc-s65fg [850.267051ms]
May 15 13:00:26.571: INFO: Created: latency-svc-rkv7n
May 15 13:00:26.589: INFO: Got endpoints: latency-svc-rkv7n [842.342151ms]
May 15 13:00:26.607: INFO: Created: latency-svc-zwr92
May 15 13:00:26.619: INFO: Got endpoints: latency-svc-zwr92 [822.568155ms]
May 15 13:00:26.661: INFO: Created: latency-svc-vtm6k
May 15 13:00:26.667: INFO: Got endpoints: latency-svc-vtm6k [800.755564ms]
May 15 13:00:26.704: INFO: Created: latency-svc-g8nss
May 15 13:00:26.709: INFO: Got endpoints: latency-svc-g8nss [749.49349ms]
May 15 13:00:26.739: INFO: Created: latency-svc-gqwp9
May 15 13:00:26.786: INFO: Got endpoints: latency-svc-gqwp9 [818.076696ms]
May 15 13:00:26.801: INFO: Created: latency-svc-zlqzz
May 15 13:00:26.824: INFO: Got endpoints: latency-svc-zlqzz [783.918376ms]
May 15 13:00:26.847: INFO: Created: latency-svc-dx25c
May 15 13:00:26.860: INFO: Got endpoints: latency-svc-dx25c [73.982945ms]
May 15 13:00:26.884: INFO: Created: latency-svc-5rrdf
May 15 13:00:26.923: INFO: Got endpoints: latency-svc-5rrdf [828.969434ms]
May 15 13:00:26.937: INFO: Created: latency-svc-xdt8r
May 15 13:00:26.957: INFO: Got endpoints: latency-svc-xdt8r [814.727014ms]
May 15 13:00:26.979: INFO: Created: latency-svc-55gkg
May 15 13:00:26.993: INFO: Got endpoints: latency-svc-55gkg [801.797397ms]
May 15 13:00:27.016: INFO: Created: latency-svc-qcdp7
May 15 13:00:27.055: INFO: Got endpoints: latency-svc-qcdp7 [779.24832ms]
May 15 13:00:27.070: INFO: Created: latency-svc-gbq2f
May 15 13:00:27.084: INFO: Got endpoints: latency-svc-gbq2f [772.5408ms]
May 15 13:00:27.112: INFO: Created: latency-svc-hpgbr
May 15 13:00:27.126: INFO: Got endpoints: latency-svc-hpgbr [725.441598ms]
May 15 13:00:27.149: INFO: Created: latency-svc-7jwbh
May 15 13:00:27.187: INFO: Got endpoints: latency-svc-7jwbh [724.489151ms]
May 15 13:00:27.208: INFO: Created: latency-svc-f5tlw
May 15 13:00:27.235: INFO: Got endpoints: latency-svc-f5tlw [706.187859ms]
May 15 13:00:27.276: INFO: Created: latency-svc-4zgkk
May 15 13:00:27.312: INFO: Got endpoints: latency-svc-4zgkk [759.395044ms]
May 15 13:00:27.328: INFO: Created: latency-svc-b8dnz
May 15 13:00:27.343: INFO: Got endpoints: latency-svc-b8dnz [753.633034ms]
May 15 13:00:27.372: INFO: Created: latency-svc-cdqtz
May 15 13:00:27.379: INFO: Got endpoints: latency-svc-cdqtz [760.59989ms]
May 15 13:00:27.400: INFO: Created: latency-svc-n84gf
May 15 13:00:27.445: INFO: Got endpoints: latency-svc-n84gf [778.181432ms]
May 15 13:00:27.453: INFO: Created: latency-svc-ghgls
May 15 13:00:27.482: INFO: Got endpoints: latency-svc-ghgls [772.661229ms]
May 15 13:00:27.508: INFO: Created: latency-svc-k9jb5
May 15 13:00:27.524: INFO: Got endpoints: latency-svc-k9jb5 [699.989916ms]
May 15 13:00:27.580: INFO: Created: latency-svc-zn76f
May 15 13:00:27.580: INFO: Got endpoints: latency-svc-zn76f [719.996156ms]
May 15 13:00:27.628: INFO: Created: latency-svc-8w6db
May 15 13:00:27.657: INFO: Got endpoints: latency-svc-8w6db [734.06465ms]
May 15 13:00:27.703: INFO: Created: latency-svc-2jpv9
May 15 13:00:27.743: INFO: Got endpoints: latency-svc-2jpv9 [785.167343ms]
May 15 13:00:27.772: INFO: Created: latency-svc-j7htk
May 15 13:00:27.783: INFO: Got endpoints: latency-svc-j7htk [790.34118ms]
May 15 13:00:27.802: INFO: Created: latency-svc-wjs5t
May 15 13:00:27.834: INFO: Got endpoints: latency-svc-wjs5t [778.891896ms]
May 15 13:00:27.849: INFO: Created: latency-svc-dntzt
May 15 13:00:27.862: INFO: Got endpoints: latency-svc-dntzt [777.884691ms]
May 15 13:00:27.879: INFO: Created: latency-svc-nvr8s
May 15 13:00:27.892: INFO: Got endpoints: latency-svc-nvr8s [765.906365ms]
May 15 13:00:27.909: INFO: Created: latency-svc-rmbdn
May 15 13:00:27.922: INFO: Got endpoints: latency-svc-rmbdn [735.6318ms]
May 15 13:00:27.978: INFO: Created: latency-svc-qj6l6
May 15 13:00:27.981: INFO: Got endpoints: latency-svc-qj6l6 [746.332399ms]
May 15 13:00:28.006: INFO: Created: latency-svc-nc4lc
May 15 13:00:28.025: INFO: Got endpoints: latency-svc-nc4lc [712.475673ms]
May 15 13:00:28.071: INFO: Created: latency-svc-4wk27
May 15 13:00:28.109: INFO: Got endpoints: latency-svc-4wk27 [765.852629ms]
May 15 13:00:28.131: INFO: Created: latency-svc-q4kwl
May 15 13:00:28.157: INFO: Got endpoints: latency-svc-q4kwl [778.202806ms]
May 15 13:00:28.180: INFO: Created: latency-svc-kv95g
May 15 13:00:28.194: INFO: Got endpoints: latency-svc-kv95g [748.636908ms]
May 15 13:00:28.254: INFO: Created: latency-svc-xlvx5
May 15 13:00:28.266: INFO: Got endpoints: latency-svc-xlvx5 [783.93542ms]
May 15 13:00:28.306: INFO: Created: latency-svc-94g9t
May 15 13:00:28.326: INFO: Got endpoints: latency-svc-94g9t [802.059314ms]
May 15 13:00:28.391: INFO: Created: latency-svc-shcf2
May 15 13:00:28.418: INFO: Got endpoints: latency-svc-shcf2 [837.494826ms]
May 15 13:00:28.444: INFO: Created: latency-svc-snbwm
May 15 13:00:28.471: INFO: Got endpoints: latency-svc-snbwm [813.68491ms]
May 15 13:00:28.607: INFO: Created: latency-svc-4mqxb
May 15 13:00:28.668: INFO: Got endpoints: latency-svc-4mqxb [925.329887ms]
May 15 13:00:28.763: INFO: Created: latency-svc-n7mwj
May 15 13:00:28.811: INFO: Got endpoints: latency-svc-n7mwj [1.027538834s]
May 15 13:00:28.859: INFO: Created: latency-svc-rlg6h
May 15 13:00:28.910: INFO: Got endpoints: latency-svc-rlg6h [1.07544604s]
May 15 13:00:28.943: INFO: Created: latency-svc-5lkpz
May 15 13:00:28.958: INFO: Got endpoints: latency-svc-5lkpz [1.095941795s]
May 15 13:00:28.984: INFO: Created: latency-svc-nw6fk
May 15 13:00:29.073: INFO: Got endpoints: latency-svc-nw6fk [1.180992817s]
May 15 13:00:29.092: INFO: Created: latency-svc-hxl67
May 15 13:00:29.126: INFO: Got endpoints: latency-svc-hxl67 [1.203682991s]
May 15 13:00:29.214: INFO: Created: latency-svc-82zgv
May 15 13:00:29.215: INFO: Got endpoints: latency-svc-82zgv [1.233970437s]
May 15 13:00:29.273: INFO: Created: latency-svc-dknks
May 15 13:00:29.300: INFO: Got endpoints: latency-svc-dknks [1.275091228s]
May 15 13:00:29.379: INFO: Created: latency-svc-9kc6x
May 15 13:00:29.423: INFO: Got endpoints: latency-svc-9kc6x [1.313396564s]
May 15 13:00:29.423: INFO: Created: latency-svc-2x52j
May 15 13:00:29.471: INFO: Got endpoints: latency-svc-2x52j [1.313079045s]
May 15 13:00:29.565: INFO: Created: latency-svc-fbm45
May 15 13:00:29.615: INFO: Got endpoints: latency-svc-fbm45 [1.420763189s]
May 15 13:00:29.715: INFO: Created: latency-svc-rzf69
May 15 13:00:29.752: INFO: Got endpoints: latency-svc-rzf69 [1.486422919s]
May 15 13:00:29.753: INFO: Created: latency-svc-g9fh2
May 15 13:00:29.782: INFO: Got endpoints: latency-svc-g9fh2 [1.456056448s]
May 15 13:00:29.813: INFO: Created: latency-svc-nvr9d
May 15 13:00:29.852: INFO: Got endpoints: latency-svc-nvr9d [1.434206465s]
May 15 13:00:29.867: INFO: Created: latency-svc-dt7rf
May 15 13:00:29.896: INFO: Got endpoints: latency-svc-dt7rf [1.424636057s]
May 15 13:00:29.915: INFO: Created: latency-svc-bm4cn
May 15 13:00:29.926: INFO: Got endpoints: latency-svc-bm4cn [1.257914299s]
May 15 13:00:29.944: INFO: Created: latency-svc-bk6c7
May 15 13:00:29.977: INFO: Got endpoints: latency-svc-bk6c7 [1.166172907s]
May 15 13:00:29.993: INFO: Created: latency-svc-kfn29
May 15 13:00:30.006: INFO: Got endpoints: latency-svc-kfn29 [1.09647068s]
May 15 13:00:30.035: INFO: Created: latency-svc-j5jl6
May 15 13:00:30.047: INFO: Got endpoints: latency-svc-j5jl6 [1.089411957s]
May 15 13:00:30.072: INFO: Created: latency-svc-cp5tp
May 15 13:00:30.074: INFO: Got endpoints: latency-svc-cp5tp [1.000396111s]
May 15 13:00:30.128: INFO: Created: latency-svc-b4fjg
May 15 13:00:30.131: INFO: Got endpoints: latency-svc-b4fjg [1.005200999s]
May 15 13:00:30.155: INFO: Created: latency-svc-xl5cr
May 15 13:00:30.168: INFO: Got endpoints: latency-svc-xl5cr [952.72163ms]
May 15 13:00:30.202: INFO: Created: latency-svc-brbww
May 15 13:00:30.312: INFO: Got endpoints: latency-svc-brbww [1.011871481s]
May 15 13:00:30.359: INFO: Created: latency-svc-lh2bv
May 15 13:00:30.415: INFO: Got endpoints: latency-svc-lh2bv [991.946852ms]
May 15 13:00:30.455: INFO: Created: latency-svc-cwggg
May 15 13:00:30.481: INFO: Got endpoints: latency-svc-cwggg [1.009951551s]
May 15 13:00:30.567: INFO: Created: latency-svc-v7cd6
May 15 13:00:30.568: INFO: Got endpoints: latency-svc-v7cd6 [953.510847ms]
May 15 13:00:30.606: INFO: Created: latency-svc-pdd6m
May 15 13:00:30.619: INFO: Got endpoints: latency-svc-pdd6m [866.509001ms]
May 15 13:00:30.647: INFO: Created: latency-svc-nzs9m
May 15 13:00:30.661: INFO: Got endpoints: latency-svc-nzs9m [878.866265ms]
May 15 13:00:30.707: INFO: Created: latency-svc-bh5g9
May 15 13:00:30.722: INFO: Got endpoints: latency-svc-bh5g9 [869.808796ms]
May 15 13:00:30.755: INFO: Created: latency-svc-bfjp7
May 15 13:00:30.770: INFO: Got endpoints: latency-svc-bfjp7 [874.193921ms]
May 15 13:00:30.791: INFO: Created: latency-svc-4fmxd
May 15 13:00:30.853: INFO: Got endpoints: latency-svc-4fmxd [926.430789ms]
May 15 13:00:30.887: INFO: Created: latency-svc-q4d4c
May 15 13:00:30.897: INFO: Got endpoints: latency-svc-q4d4c [919.590405ms]
May 15 13:00:30.917: INFO: Created: latency-svc-pcwmq
May 15 13:00:30.927: INFO: Got endpoints: latency-svc-pcwmq [920.597038ms]
May 15 13:00:31.015: INFO: Created: latency-svc-xctnt
May 15 13:00:31.036: INFO: Got endpoints: latency-svc-xctnt [989.07912ms]
May 15 13:00:31.037: INFO: Created: latency-svc-8z5p5
May 15 13:00:31.061: INFO: Got endpoints: latency-svc-8z5p5 [987.004903ms]
May 15 13:00:31.091: INFO: Created: latency-svc-7t6db
May 15 13:00:31.109: INFO: Got endpoints: latency-svc-7t6db [977.823745ms]
May 15 13:00:31.187: INFO: Created: latency-svc-tpm6s
May 15 13:00:31.212: INFO: Got endpoints: latency-svc-tpm6s [1.043647948s]
May 15 13:00:31.254: INFO: Created: latency-svc-k2zzf
May 15 13:00:31.271: INFO: Got endpoints: latency-svc-k2zzf [958.663804ms]
May 15 13:00:31.331: INFO: Created: latency-svc-hrhn7
May 15 13:00:31.348: INFO: Got endpoints: latency-svc-hrhn7 [933.368487ms]
May 15 13:00:31.379: INFO: Created: latency-svc-l6xfr
May 15 13:00:31.396: INFO: Got endpoints: latency-svc-l6xfr [915.731953ms]
May 15 13:00:31.421: INFO: Created: latency-svc-s5brf
May 15 13:00:31.463: INFO: Got endpoints: latency-svc-s5brf [894.321033ms]
May 15 13:00:31.488: INFO: Created: latency-svc-6nkg9
May 15 13:00:31.505: INFO: Got endpoints: latency-svc-6nkg9 [886.032924ms]
May 15 13:00:31.541: INFO: Created: latency-svc-f8nmd
May 15 13:00:31.553: INFO: Got endpoints: latency-svc-f8nmd [891.796116ms]
May 15 13:00:31.595: INFO: Created: latency-svc-x7fhb
May 15 13:00:31.619: INFO: Got endpoints: latency-svc-x7fhb [896.63522ms]
May 15 13:00:31.620: INFO: Created: latency-svc-psv54
May 15 13:00:31.632: INFO: Got endpoints: latency-svc-psv54 [861.602088ms]
May 15 13:00:31.655: INFO: Created: latency-svc-mjx5d
May 15 13:00:31.668: INFO: Got endpoints: latency-svc-mjx5d [815.566007ms]
May 15 13:00:31.745: INFO: Created: latency-svc-psn8l
May 15 13:00:31.747: INFO: Got endpoints: latency-svc-psn8l [849.779485ms]
May 15 13:00:31.776: INFO: Created: latency-svc-s69wf
May 15 13:00:31.789: INFO: Got endpoints: latency-svc-s69wf [862.052536ms]
May 15 13:00:31.836: INFO: Created: latency-svc-c2jz7
May 15 13:00:31.888: INFO: Got endpoints: latency-svc-c2jz7 [851.390975ms]
May 15 13:00:31.901: INFO: Created: latency-svc-828km
May 15 13:00:31.915: INFO: Got endpoints: latency-svc-828km [854.277324ms]
May 15 13:00:31.943: INFO: Created: latency-svc-7gcm6
May 15 13:00:31.972: INFO: Got endpoints: latency-svc-7gcm6 [863.036946ms]
May 15 13:00:32.032: INFO: Created: latency-svc-nmxn4
May 15 13:00:32.063: INFO: Got endpoints: latency-svc-nmxn4 [851.073902ms]
May 15 13:00:32.099: INFO: Created: latency-svc-9fqvs
May 15 13:00:32.199: INFO: Got endpoints: latency-svc-9fqvs [928.375015ms]
May 15 13:00:32.203: INFO: Created: latency-svc-w5vpc
May 15 13:00:32.210: INFO: Got endpoints: latency-svc-w5vpc [861.870635ms]
May 15 13:00:32.237: INFO: Created: latency-svc-47sd9
May 15 13:00:32.253: INFO: Got endpoints: latency-svc-47sd9 [855.992158ms]
May 15 13:00:32.273: INFO: Created: latency-svc-j9lp7
May 15 13:00:32.289: INFO: Got endpoints: latency-svc-j9lp7 [825.974207ms]
May 15 13:00:32.340: INFO: Created: latency-svc-vwj94
May 15 13:00:32.357: INFO: Got endpoints: latency-svc-vwj94 [852.185449ms]
May 15 13:00:32.393: INFO: Created: latency-svc-7w4fm
May 15 13:00:32.415: INFO: Got endpoints: latency-svc-7w4fm [861.992541ms]
May 15 13:00:32.459: INFO: Created: latency-svc-4bmhn
May 15 13:00:32.463: INFO: Got endpoints: latency-svc-4bmhn [844.682279ms]
May 15 13:00:32.489: INFO: Created: latency-svc-f8498
May 15 13:00:32.499: INFO: Got endpoints: latency-svc-f8498 [867.320151ms]
May 15 13:00:32.519: INFO: Created: latency-svc-p8fhs
May 15 13:00:32.536: INFO: Got endpoints: latency-svc-p8fhs [867.613481ms]
May 15 13:00:32.601: INFO: Created: latency-svc-nwds6
May 15 13:00:32.626: INFO: Got endpoints: latency-svc-nwds6 [879.460068ms]
May 15 13:00:32.627: INFO: Created: latency-svc-n6x2n
May 15 13:00:32.639: INFO: Got endpoints: latency-svc-n6x2n [849.587569ms]
May 15 13:00:32.662: INFO: Created: latency-svc-ng442
May 15 13:00:32.675: INFO: Got endpoints: latency-svc-ng442 [786.969225ms]
May 15 13:00:32.699: INFO: Created: latency-svc-56tdb
May 15 13:00:32.762: INFO: Got endpoints: latency-svc-56tdb [846.608698ms]
May 15 13:00:32.777: INFO: Created: latency-svc-75m56
May 15 13:00:32.790: INFO: Got endpoints: latency-svc-75m56 [817.229983ms]
May 15 13:00:32.972: INFO: Created: latency-svc-xcn9j
May 15 13:00:32.976: INFO: Got endpoints: latency-svc-xcn9j [912.852279ms]
May 15 13:00:33.167: INFO: Created: latency-svc-25cgd
May 15 13:00:33.179: INFO: Got endpoints: latency-svc-25cgd [979.881288ms]
May 15 13:00:33.202: INFO: Created: latency-svc-s8mh2
May 15 13:00:33.216: INFO: Got endpoints: latency-svc-s8mh2 [1.005690658s]
May 15 13:00:33.269: INFO: Created: latency-svc-l8855
May 15 13:00:33.270: INFO: Got endpoints: latency-svc-l8855 [1.017739456s]
May 15 13:00:33.340: INFO: Created: latency-svc-lbfk5
May 15 13:00:33.361: INFO: Got endpoints: latency-svc-lbfk5 [1.071951628s]
May 15 13:00:33.415: INFO: Created: latency-svc-nhjqg
May 15 13:00:33.420: INFO: Got endpoints: latency-svc-nhjqg [1.062490496s]
May 15 13:00:33.442: INFO: Created: latency-svc-x95rd
May 15 13:00:33.456: INFO: Got endpoints: latency-svc-x95rd [1.041064319s]
May 15 13:00:33.484: INFO: Created: latency-svc-ql9gg
May 15 13:00:33.493: INFO: Got endpoints: latency-svc-ql9gg [1.029046485s]
May 15 13:00:33.560: INFO: Created: latency-svc-46bgv
May 15 13:00:33.565: INFO: Got endpoints: latency-svc-46bgv [1.066061206s]
May 15 13:00:33.592: INFO: Created: latency-svc-pc7gg
May 15 13:00:33.607: INFO: Got endpoints: latency-svc-pc7gg [1.071202804s]
May 15 13:00:33.629: INFO: Created: latency-svc-lbtt2
May 15 13:00:33.659: INFO: Got endpoints: latency-svc-lbtt2 [1.032179288s]
May 15 13:00:33.714: INFO: Created: latency-svc-9tc6q
May 15 13:00:33.730: INFO: Got endpoints: latency-svc-9tc6q [1.09145842s]
May 15 13:00:33.760: INFO: Created: latency-svc-5fqhq
May 15 13:00:33.777: INFO: Got endpoints: latency-svc-5fqhq [1.102076672s]
May 15 13:00:33.796: INFO: Created: latency-svc-jt2bz
May 15 13:00:33.812: INFO: Got endpoints: latency-svc-jt2bz [1.05057455s]
May 15 13:00:33.858: INFO: Created: latency-svc-qrnwc
May 15 13:00:33.860: INFO: Got endpoints: latency-svc-qrnwc [1.070793959s]
May 15 13:00:33.886: INFO: Created: latency-svc-9276x
May 15 13:00:33.903: INFO: Got endpoints: latency-svc-9276x [927.640963ms]
May 15 13:00:33.922: INFO: Created: latency-svc-xljws
May 15 13:00:33.934: INFO: Got endpoints: latency-svc-xljws [754.437359ms]
May 15 13:00:33.952: INFO: Created: latency-svc-95h9p
May 15 13:00:34.014: INFO: Got endpoints: latency-svc-95h9p [797.783296ms]
May 15 13:00:34.016: INFO: Created: latency-svc-hg84p
May 15 13:00:34.018: INFO: Got endpoints: latency-svc-hg84p [747.194285ms]
May 15 13:00:34.042: INFO: Created: latency-svc-ghd2d
May 15 13:00:34.061: INFO: Got endpoints: latency-svc-ghd2d [700.492909ms]
May 15 13:00:34.078: INFO: Created: latency-svc-tgxmt
May 15 13:00:34.097: INFO: Got endpoints: latency-svc-tgxmt [676.605775ms]
May 15 13:00:34.156: INFO: Created: latency-svc-28sfw
May 15 13:00:34.181: INFO: Got endpoints: latency-svc-28sfw [724.643804ms]
May 15 13:00:34.235: INFO: Created: latency-svc-zljcf
May 15 13:00:34.271: INFO: Got endpoints: latency-svc-zljcf [778.181905ms]
May 15 13:00:34.282: INFO: Created: latency-svc-jqzmk
May 15 13:00:34.295: INFO: Got endpoints: latency-svc-jqzmk [729.683062ms]
May 15 13:00:34.295: INFO: Latencies: [73.982945ms 120.581614ms 185.184502ms 305.748914ms 320.287474ms 350.208684ms 411.554929ms 498.590676ms 565.638055ms 632.28174ms 676.605775ms 699.989916ms 700.492909ms 700.630295ms 706.187859ms
708.367996ms 712.475673ms 716.544658ms 719.996156ms 722.533106ms 724.489151ms 724.643804ms 725.441598ms 729.683062ms 734.06465ms 735.6318ms 746.332399ms 747.194285ms 748.015297ms 748.506935ms 748.636908ms 749.49349ms 753.633034ms 754.437359ms 757.319588ms 759.395044ms 760.59989ms 765.021771ms 765.321346ms 765.852629ms 765.906365ms 771.615581ms 772.5408ms 772.661229ms 777.884691ms 778.181432ms 778.181905ms 778.202806ms 778.891896ms 779.24832ms 783.76482ms 783.918376ms 783.93542ms 785.167343ms 786.969225ms 790.34118ms 797.178486ms 797.783296ms 800.755564ms 801.119351ms 801.797397ms 802.059314ms 803.058611ms 805.936074ms 807.75762ms 813.68491ms 814.030856ms 814.727014ms 815.566007ms 817.229983ms 818.076696ms 818.480803ms 820.099515ms 821.948982ms 822.568155ms 823.875908ms 825.625746ms 825.974207ms 826.852415ms 828.969434ms 831.699812ms 837.494826ms 842.342151ms 844.682279ms 845.013717ms 846.608698ms 849.587569ms 849.779485ms 850.267051ms 851.073902ms 851.300458ms 851.390975ms 852.185449ms 854.277324ms 855.992158ms 858.074573ms 861.602088ms 861.870635ms 861.992541ms 862.051971ms 862.052536ms 863.036946ms 863.363062ms 866.509001ms 867.320151ms 867.613481ms 869.808796ms 874.193921ms 875.942988ms 877.22834ms 878.866265ms 879.460068ms 886.032924ms 891.796116ms 892.173165ms 894.321033ms 896.63522ms 901.917ms 907.529375ms 912.852279ms 913.970827ms 915.731953ms 919.548995ms 919.590405ms 920.162367ms 920.597038ms 921.05523ms 925.329887ms 926.430789ms 927.640963ms 928.375015ms 933.368487ms 935.311496ms 952.72163ms 953.510847ms 958.663804ms 965.451859ms 967.280748ms 968.647776ms 977.823745ms 979.881288ms 987.004903ms 989.07912ms 991.946852ms 994.549956ms 1.000396111s 1.005200999s 1.005690658s 1.009951551s 1.011871481s 1.017739456s 1.021939108s 1.02701607s 1.027538834s 1.029046485s 1.032179288s 1.041064319s 1.043647948s 1.05057455s 1.054679826s 1.054756411s 1.062490496s 1.065924985s 1.066061206s 1.070793959s 1.071202804s 1.071951628s 1.07544604s 1.089411957s 1.09145842s 1.095941795s 1.09647068s 1.102076672s 1.113074517s 1.161392408s 1.166172907s 1.180992817s 1.184676781s 1.186026044s 1.195172867s 1.195504076s 1.203682991s 1.205821054s 1.22006267s 1.223214424s 1.225228144s 1.225654598s 1.225803951s 1.226003654s 1.233970437s 1.243154871s 1.257914299s 1.275091228s 1.313079045s 1.313396564s 1.420763189s 1.424636057s 1.434206465s 1.456056448s 1.486422919s] May 15 13:00:34.296: INFO: 50 %ile: 862.052536ms May 15 13:00:34.296: INFO: 90 %ile: 1.195504076s May 15 13:00:34.296: INFO: 99 %ile: 1.456056448s May 15 13:00:34.296: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:00:34.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3111" for this suite. 
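The percentiles above come from creating 200 services in quick succession and timing how long each takes to get endpoints. The same propagation delay can be approximated by hand with kubectl; a minimal sketch, assuming only a reachable cluster (the name latency-demo and the image tag are illustrative, not from this run):

  kubectl run latency-demo --image=nginx:1.14-alpine --port=80 --restart=Never
  kubectl expose pod latency-demo --port=80
  start=$(date +%s%N)
  # poll until the service's Endpoints object lists at least one pod IP
  until kubectl get endpoints latency-demo -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q .; do sleep 0.05; done
  echo "endpoints populated after $(( ($(date +%s%N) - start) / 1000000 )) ms"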
May 15 13:01:00.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:01:00.434: INFO: namespace svc-latency-3111 deletion completed in 26.103875087s • [SLOW TEST:42.722 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:01:00.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 13:01:08.617: INFO: DNS probes using dns-test-8db82844-1b8f-470e-8ed6-bb6fe2b25493 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 13:01:16.740: INFO: File wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:16.744: INFO: File jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:16.744: INFO: Lookups using dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 failed for: [wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local] May 15 13:01:21.749: INFO: File wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 15 13:01:21.753: INFO: File jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:21.753: INFO: Lookups using dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 failed for: [wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local] May 15 13:01:26.767: INFO: File wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:26.772: INFO: File jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:26.772: INFO: Lookups using dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 failed for: [wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local] May 15 13:01:31.753: INFO: File wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:31.755: INFO: File jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:31.755: INFO: Lookups using dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 failed for: [wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local] May 15 13:01:36.753: INFO: File jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local from pod dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 13:01:36.753: INFO: Lookups using dns-1024/dns-test-27125769-c105-4822-8f30-a1cdb0d43074 failed for: [jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local] May 15 13:01:41.752: INFO: DNS probes using dns-test-27125769-c105-4822-8f30-a1cdb0d43074 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1024.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1024.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 13:01:50.480: INFO: DNS probes using dns-test-a31c09ee-212d-408b-bdba-d38ce55190fb succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:01:50.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1024" for this suite. 
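The three probe rounds above assert, in order, that an ExternalName service resolves as a CNAME to its spec.externalName, that lookups track an update of that field from foo.example.com to bar.example.com, and that A records are served once the service is converted to ClusterIP. A hand-run sketch of the first two steps (the service name dns-demo and the dnsutils image are assumptions, not from this run):

  kubectl create service externalname dns-demo --external-name foo.example.com
  # expect foo.example.com. in the answer
  kubectl run -it --rm dns-probe --image=tutum/dnsutils --restart=Never -- \
      dig +short dns-demo.default.svc.cluster.local CNAME
  kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
  # repeating the dig should return bar.example.com. once resolver caches expire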
May 15 13:01:56.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:01:56.646: INFO: namespace dns-1024 deletion completed in 6.08355464s • [SLOW TEST:56.211 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:01:56.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 15 13:01:56.766: INFO: Waiting up to 5m0s for pod "pod-68c584d0-d404-4397-9e4c-cb89d733c7aa" in namespace "emptydir-724" to be "success or failure" May 15 13:01:56.771: INFO: Pod "pod-68c584d0-d404-4397-9e4c-cb89d733c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.414753ms May 15 13:01:58.776: INFO: Pod "pod-68c584d0-d404-4397-9e4c-cb89d733c7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010075718s May 15 13:02:00.780: INFO: Pod "pod-68c584d0-d404-4397-9e4c-cb89d733c7aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013773557s STEP: Saw pod success May 15 13:02:00.780: INFO: Pod "pod-68c584d0-d404-4397-9e4c-cb89d733c7aa" satisfied condition "success or failure" May 15 13:02:00.782: INFO: Trying to get logs from node iruya-worker2 pod pod-68c584d0-d404-4397-9e4c-cb89d733c7aa container test-container: STEP: delete the pod May 15 13:02:00.896: INFO: Waiting for pod pod-68c584d0-d404-4397-9e4c-cb89d733c7aa to disappear May 15 13:02:00.900: INFO: Pod pod-68c584d0-d404-4397-9e4c-cb89d733c7aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:02:00.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-724" for this suite. 
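The pod in this test mounts an emptyDir volume on the default (node-disk) medium and verifies the mount's file mode; the conformance expectation is that the directory comes up world-writable (0777). A minimal reproduction under that assumption (manifest and names are illustrative):

  # emptydir-mode-demo.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "stat -c '%a' /mnt/vol"]   # expect 777
      volumeMounts:
      - name: vol
        mountPath: /mnt/vol
    volumes:
    - name: vol
      emptyDir: {}

  kubectl apply -f emptydir-mode-demo.yaml
  kubectl logs emptydir-mode-demo   # read once the pod reaches Succeeded, as polled above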
May 15 13:02:07.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:02:07.152: INFO: namespace emptydir-724 deletion completed in 6.188328611s • [SLOW TEST:10.506 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:02:07.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:02:07.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf" in namespace "downward-api-4085" to be "success or failure" May 15 13:02:07.263: INFO: Pod "downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 46.185826ms May 15 13:02:09.266: INFO: Pod "downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049333789s May 15 13:02:11.290: INFO: Pod "downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072969057s STEP: Saw pod success May 15 13:02:11.290: INFO: Pod "downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf" satisfied condition "success or failure" May 15 13:02:11.293: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf container client-container: STEP: delete the pod May 15 13:02:11.332: INFO: Waiting for pod downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf to disappear May 15 13:02:11.346: INFO: Pod downwardapi-volume-2cec5a8d-74ce-4db1-bebc-9a4449a8e2cf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:02:11.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4085" for this suite. 
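Here the downward API volume plugin projects the container's own memory request into a file, and the test reads it back from the pod logs. A pod spec sketch of that wiring (names and the 64Mi request are illustrative; with divisor 1Mi the file should read 64):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
            divisor: 1Mi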
May 15 13:02:17.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:02:17.451: INFO: namespace downward-api-4085 deletion completed in 6.102162324s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:02:17.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 15 13:02:17.500: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 13:02:17.519: INFO: Waiting for terminating namespaces to be deleted... May 15 13:02:17.521: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 15 13:02:17.526: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 15 13:02:17.526: INFO: Container kube-proxy ready: true, restart count 0 May 15 13:02:17.526: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 15 13:02:17.526: INFO: Container kindnet-cni ready: true, restart count 0 May 15 13:02:17.526: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 15 13:02:17.532: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 15 13:02:17.532: INFO: Container coredns ready: true, restart count 0 May 15 13:02:17.532: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 15 13:02:17.532: INFO: Container coredns ready: true, restart count 0 May 15 13:02:17.532: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 15 13:02:17.532: INFO: Container kube-proxy ready: true, restart count 0 May 15 13:02:17.532: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 15 13:02:17.532: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c02eb42a-e70f-4b3f-936d-4d0e9c824b3f 42 STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c02eb42a-e70f-4b3f-936d-4d0e9c824b3f off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c02eb42a-e70f-4b3f-936d-4d0e9c824b3f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:02:26.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8366" for this suite. May 15 13:02:36.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:02:36.383: INFO: namespace sched-pred-8366 deletion completed in 10.09113888s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.932 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:02:36.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:02:36.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803" in namespace "projected-6877" to be "success or failure" May 15 13:02:36.467: INFO: Pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803": Phase="Pending", Reason="", readiness=false. Elapsed: 17.374761ms May 15 13:02:38.472: INFO: Pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022551179s May 15 13:02:40.476: INFO: Pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803": Phase="Running", Reason="", readiness=true. Elapsed: 4.02628071s May 15 13:02:42.481: INFO: Pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.030929135s STEP: Saw pod success May 15 13:02:42.481: INFO: Pod "downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803" satisfied condition "success or failure" May 15 13:02:42.486: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803 container client-container: STEP: delete the pod May 15 13:02:42.557: INFO: Waiting for pod downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803 to disappear May 15 13:02:42.562: INFO: Pod downwardapi-volume-d8fb1f4f-01ca-46bc-9542-c70acc1bc803 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:02:42.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6877" for this suite. May 15 13:02:48.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:02:48.643: INFO: namespace projected-6877 deletion completed in 6.077878139s • [SLOW TEST:12.260 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:02:48.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:02:48.721: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 15 13:02:48.746: INFO: Pod name sample-pod: Found 0 pods out of 1 May 15 13:02:53.751: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 13:02:53.751: INFO: Creating deployment "test-rolling-update-deployment" May 15 13:02:53.756: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 15 13:02:53.764: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 15 13:02:55.772: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 15 13:02:55.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144573, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144573, loc:(*time.Location)(0x7ead8c0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144573, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144573, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 13:02:57.825: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 15 13:02:57.836: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3816,SelfLink:/apis/apps/v1/namespaces/deployment-3816/deployments/test-rolling-update-deployment,UID:6f388601-63d1-4f75-b4d7-f5bd72c5eb15,ResourceVersion:11033137,Generation:1,CreationTimestamp:2020-05-15 13:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-15 
13:02:53 +0000 UTC 2020-05-15 13:02:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-15 13:02:57 +0000 UTC 2020-05-15 13:02:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 15 13:02:57.840: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3816,SelfLink:/apis/apps/v1/namespaces/deployment-3816/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:5f2b4310-88d7-4ed7-ac4e-6ec39f022bd5,ResourceVersion:11033126,Generation:1,CreationTimestamp:2020-05-15 13:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6f388601-63d1-4f75-b4d7-f5bd72c5eb15 0xc00327b9c7 0xc00327b9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 15 13:02:57.840: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 15 13:02:57.840: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3816,SelfLink:/apis/apps/v1/namespaces/deployment-3816/replicasets/test-rolling-update-controller,UID:b732b127-1495-4252-a3ed-94e21caaccb6,ResourceVersion:11033136,Generation:2,CreationTimestamp:2020-05-15 13:02:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6f388601-63d1-4f75-b4d7-f5bd72c5eb15 0xc00327b8f7 0xc00327b8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 15 13:02:57.844: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-p54zc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-p54zc,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3816,SelfLink:/api/v1/namespaces/deployment-3816/pods/test-rolling-update-deployment-79f6b9d75c-p54zc,UID:88c78580-fa3c-48e4-86f8-3fd5db945b8b,ResourceVersion:11033125,Generation:0,CreationTimestamp:2020-05-15 13:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 5f2b4310-88d7-4ed7-ac4e-6ec39f022bd5 0xc00328c287 0xc00328c288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8hshj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-8hshj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8hshj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00328c300} {node.kubernetes.io/unreachable Exists NoExecute 0xc00328c320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:02:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:02:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:02:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:02:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.192,StartTime:2020-05-15 13:02:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-15 13:02:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8c746e28cfde741d32f7c15ba6224072e36cc6626aeeba9830c8444b220b263f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:02:57.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3816" for this suite. 
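The object dumps above show the expected end state: the adopted ReplicaSet (test-rolling-update-controller) scaled to Replicas:0 at the old revision, and the new ReplicaSet one revision later serving a single available pod. The same rolling-update behavior can be watched by hand (names and image tags are illustrative):

  kubectl create deployment rolling-demo --image=nginx:1.14-alpine
  kubectl set image deployment/rolling-demo nginx=nginx:1.15-alpine   # triggers the rollout
  kubectl rollout status deployment/rolling-demo
  kubectl get rs -l app=rolling-demo   # old ReplicaSet shows DESIRED 0, the new one 1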
May 15 13:03:03.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:03:03.959: INFO: namespace deployment-3816 deletion completed in 6.111557524s • [SLOW TEST:15.316 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:03:03.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 15 13:03:04.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4440 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 15 13:03:10.530: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0515 13:03:10.467288 37 log.go:172] (0xc000119130) (0xc000854780) Create stream\nI0515 13:03:10.467353 37 log.go:172] (0xc000119130) (0xc000854780) Stream added, broadcasting: 1\nI0515 13:03:10.470409 37 log.go:172] (0xc000119130) Reply frame received for 1\nI0515 13:03:10.470437 37 log.go:172] (0xc000119130) (0xc000390780) Create stream\nI0515 13:03:10.470445 37 log.go:172] (0xc000119130) (0xc000390780) Stream added, broadcasting: 3\nI0515 13:03:10.471396 37 log.go:172] (0xc000119130) Reply frame received for 3\nI0515 13:03:10.471448 37 log.go:172] (0xc000119130) (0xc000390820) Create stream\nI0515 13:03:10.471476 37 log.go:172] (0xc000119130) (0xc000390820) Stream added, broadcasting: 5\nI0515 13:03:10.472343 37 log.go:172] (0xc000119130) Reply frame received for 5\nI0515 13:03:10.472392 37 log.go:172] (0xc000119130) (0xc000854820) Create stream\nI0515 13:03:10.472413 37 log.go:172] (0xc000119130) (0xc000854820) Stream added, broadcasting: 7\nI0515 13:03:10.473496 37 log.go:172] (0xc000119130) Reply frame received for 7\nI0515 13:03:10.473574 37 log.go:172] (0xc000390780) (3) Writing data frame\nI0515 13:03:10.473707 37 log.go:172] (0xc000390780) (3) Writing data frame\nI0515 13:03:10.474401 37 log.go:172] (0xc000119130) Data frame received for 5\nI0515 13:03:10.474417 37 log.go:172] (0xc000390820) (5) Data frame handling\nI0515 13:03:10.474432 37 log.go:172] (0xc000390820) (5) Data frame sent\nI0515 13:03:10.475158 37 log.go:172] (0xc000119130) Data frame received for 5\nI0515 13:03:10.475180 37 log.go:172] (0xc000390820) (5) Data frame handling\nI0515 13:03:10.475194 37 log.go:172] (0xc000390820) (5) Data frame sent\nI0515 13:03:10.507301 37 log.go:172] (0xc000119130) Data frame received for 7\nI0515 13:03:10.507333 37 log.go:172] (0xc000119130) Data frame received for 5\nI0515 13:03:10.507375 37 log.go:172] (0xc000390820) (5) Data frame handling\nI0515 13:03:10.507401 37 log.go:172] (0xc000854820) (7) Data frame handling\nI0515 13:03:10.507968 37 log.go:172] (0xc000119130) Data frame received for 1\nI0515 13:03:10.508058 37 log.go:172] (0xc000854780) (1) Data frame handling\nI0515 13:03:10.508088 37 log.go:172] (0xc000854780) (1) Data frame sent\nI0515 13:03:10.508109 37 log.go:172] (0xc000119130) (0xc000854780) Stream removed, broadcasting: 1\nI0515 13:03:10.508214 37 log.go:172] (0xc000119130) (0xc000854780) Stream removed, broadcasting: 1\nI0515 13:03:10.508238 37 log.go:172] (0xc000119130) (0xc000390780) Stream removed, broadcasting: 3\nI0515 13:03:10.508255 37 log.go:172] (0xc000119130) (0xc000390820) Stream removed, broadcasting: 5\nI0515 13:03:10.508392 37 log.go:172] (0xc000119130) Go away received\nI0515 13:03:10.508452 37 log.go:172] (0xc000119130) (0xc000854820) Stream removed, broadcasting: 7\n" May 15 13:03:10.530: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:03:12.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4440" for this suite. 
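The command logged above streams stdin into a one-shot job and relies on --rm to delete it on exit; the stdout check confirms both the echoed input and the deletion message. Re-run by hand it looks like this (taken from the run above, with quoting normalized for a POSIX shell; the --generator flag was already deprecated at the time):

  echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 \
      --rm=true --generator=job/v1 --restart=OnFailure \
      --attach=true --stdin -- sh -c 'cat && echo stdin closed'
  kubectl get job e2e-test-rm-busybox-job   # should now report NotFound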
May 15 13:03:18.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:03:18.634: INFO: namespace kubectl-4440 deletion completed in 6.094410525s • [SLOW TEST:14.675 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:03:18.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:03:18.695: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:03:19.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9010" for this suite. 
May 15 13:03:25.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:03:25.954: INFO: namespace custom-resource-definition-9010 deletion completed in 6.190703807s • [SLOW TEST:7.319 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:03:25.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0515 13:03:37.873713 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:03:37.873: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:03:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4056" for this suite. 
May 15 13:03:47.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:03:47.958: INFO: namespace gc-4056 deletion completed in 10.082099558s • [SLOW TEST:22.004 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:03:47.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 13:03:53.092: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:03:53.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1457" for this suite. 
May 15 13:03:59.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:03:59.286: INFO: namespace container-runtime-1457 deletion completed in 6.116641073s

• [SLOW TEST:11.327 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:03:59.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-abad8914-3f11-4a5c-895e-f62b8bd11cb6
STEP: Creating a pod to test consume secrets
May 15 13:03:59.346: INFO: Waiting up to 5m0s for pod "pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d" in namespace "secrets-9416" to be "success or failure"
May 15 13:03:59.362: INFO: Pod "pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.964548ms
May 15 13:04:01.475: INFO: Pod "pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129362283s
May 15 13:04:03.478: INFO: Pod "pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132519293s
STEP: Saw pod success
May 15 13:04:03.478: INFO: Pod "pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d" satisfied condition "success or failure"
May 15 13:04:03.480: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d container secret-volume-test:
STEP: delete the pod
May 15 13:04:03.524: INFO: Waiting for pod pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d to disappear
May 15 13:04:03.597: INFO: Pod pod-secrets-ad37ece1-024d-4892-a301-a349bdefd58d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:04:03.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9416" for this suite.
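The "mappings" in this spec's name refers to SecretVolumeSource.Items, which remaps a secret key to a different file path inside the mounted volume. A sketch of the shape involved, with hypothetical names and busybox standing in for the suite's test image:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default"

	// A secret with one key...
	sec := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(sec); err != nil {
		panic(err)
	}

	// ...mounted with an explicit key-to-path mapping, so the file appears
	// under a remapped name instead of the raw key.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{
					SecretName: sec.Name,
					Items:      []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
				}},
			}},
			Containers: []v1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
}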
May 15 13:04:09.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:04:09.750: INFO: namespace secrets-9416 deletion completed in 6.150177002s

• [SLOW TEST:10.464 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:04:09.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-565
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 15 13:04:09.824: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 15 13:04:37.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.201:8080/dial?request=hostName&protocol=http&host=10.244.1.200&port=8080&tries=1'] Namespace:pod-network-test-565 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 15 13:04:37.977: INFO: >>> kubeConfig: /root/.kube/config
I0515 13:04:38.008431 6 log.go:172] (0xc001f4e840) (0xc00053df40) Create stream
I0515 13:04:38.008461 6 log.go:172] (0xc001f4e840) (0xc00053df40) Stream added, broadcasting: 1
I0515 13:04:38.010958 6 log.go:172] (0xc001f4e840) Reply frame received for 1
I0515 13:04:38.011015 6 log.go:172] (0xc001f4e840) (0xc00093af00) Create stream
I0515 13:04:38.011028 6 log.go:172] (0xc001f4e840) (0xc00093af00) Stream added, broadcasting: 3
I0515 13:04:38.011940 6 log.go:172] (0xc001f4e840) Reply frame received for 3
I0515 13:04:38.011974 6 log.go:172] (0xc001f4e840) (0xc0017ce000) Create stream
I0515 13:04:38.011987 6 log.go:172] (0xc001f4e840) (0xc0017ce000) Stream added, broadcasting: 5
I0515 13:04:38.012908 6 log.go:172] (0xc001f4e840) Reply frame received for 5
I0515 13:04:38.144765 6 log.go:172] (0xc001f4e840) Data frame received for 3
I0515 13:04:38.144804 6 log.go:172] (0xc00093af00) (3) Data frame handling
I0515 13:04:38.144831 6 log.go:172] (0xc00093af00) (3) Data frame sent
I0515 13:04:38.145284 6 log.go:172] (0xc001f4e840) Data frame received for 3
I0515 13:04:38.145307 6 log.go:172] (0xc00093af00) (3) Data frame handling
I0515 13:04:38.145358 6 log.go:172] (0xc001f4e840) Data frame received for 5
I0515 13:04:38.145389 6 log.go:172] (0xc0017ce000) (5) Data frame handling
I0515 13:04:38.146939 6 log.go:172] (0xc001f4e840) Data frame received for 1
I0515 13:04:38.146958 6 log.go:172] (0xc00053df40) (1) Data frame handling
I0515 13:04:38.146971 6 log.go:172] (0xc00053df40) (1) Data frame sent
I0515 13:04:38.147180 6 log.go:172] (0xc001f4e840) (0xc00053df40) Stream removed, broadcasting: 1
I0515 13:04:38.147215 6 log.go:172] (0xc001f4e840) Go away received
I0515 13:04:38.147303 6 log.go:172] (0xc001f4e840) (0xc00053df40) Stream removed, broadcasting: 1
I0515 13:04:38.147322 6 log.go:172] (0xc001f4e840) (0xc00093af00) Stream removed, broadcasting: 3
I0515 13:04:38.147335 6 log.go:172] (0xc001f4e840) (0xc0017ce000) Stream removed, broadcasting: 5
May 15 13:04:38.147: INFO: Waiting for endpoints: map[]
May 15 13:04:38.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.201:8080/dial?request=hostName&protocol=http&host=10.244.2.28&port=8080&tries=1'] Namespace:pod-network-test-565 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 15 13:04:38.150: INFO: >>> kubeConfig: /root/.kube/config
I0515 13:04:38.173842 6 log.go:172] (0xc001f4f970) (0xc0017ceb40) Create stream
I0515 13:04:38.173876 6 log.go:172] (0xc001f4f970) (0xc0017ceb40) Stream added, broadcasting: 1
I0515 13:04:38.176211 6 log.go:172] (0xc001f4f970) Reply frame received for 1
I0515 13:04:38.176244 6 log.go:172] (0xc001f4f970) (0xc0017cedc0) Create stream
I0515 13:04:38.176255 6 log.go:172] (0xc001f4f970) (0xc0017cedc0) Stream added, broadcasting: 3
I0515 13:04:38.176998 6 log.go:172] (0xc001f4f970) Reply frame received for 3
I0515 13:04:38.177035 6 log.go:172] (0xc001f4f970) (0xc001afcb40) Create stream
I0515 13:04:38.177060 6 log.go:172] (0xc001f4f970) (0xc001afcb40) Stream added, broadcasting: 5
I0515 13:04:38.178074 6 log.go:172] (0xc001f4f970) Reply frame received for 5
I0515 13:04:38.240136 6 log.go:172] (0xc001f4f970) Data frame received for 3
I0515 13:04:38.240160 6 log.go:172] (0xc0017cedc0) (3) Data frame handling
I0515 13:04:38.240176 6 log.go:172] (0xc0017cedc0) (3) Data frame sent
I0515 13:04:38.240489 6 log.go:172] (0xc001f4f970) Data frame received for 5
I0515 13:04:38.240507 6 log.go:172] (0xc001afcb40) (5) Data frame handling
I0515 13:04:38.240545 6 log.go:172] (0xc001f4f970) Data frame received for 3
I0515 13:04:38.240562 6 log.go:172] (0xc0017cedc0) (3) Data frame handling
I0515 13:04:38.242248 6 log.go:172] (0xc001f4f970) Data frame received for 1
I0515 13:04:38.242262 6 log.go:172] (0xc0017ceb40) (1) Data frame handling
I0515 13:04:38.242275 6 log.go:172] (0xc0017ceb40) (1) Data frame sent
I0515 13:04:38.242332 6 log.go:172] (0xc001f4f970) (0xc0017ceb40) Stream removed, broadcasting: 1
I0515 13:04:38.242401 6 log.go:172] (0xc001f4f970) (0xc0017ceb40) Stream removed, broadcasting: 1
I0515 13:04:38.242411 6 log.go:172] (0xc001f4f970) (0xc0017cedc0) Stream removed, broadcasting: 3
I0515 13:04:38.242418 6 log.go:172] (0xc001f4f970) (0xc001afcb40) Stream removed, broadcasting: 5
May 15 13:04:38.242: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:04:38.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0515 13:04:38.242602 6 log.go:172] (0xc001f4f970) Go away received
STEP: Destroying namespace "pod-network-test-565" for this suite.
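The curls above target the test container's /dial endpoint, which probes the named target pod over HTTP and echoes what it answered. A sketch of the same request in Go; the dialResponse shape is an assumption inferred from the test's usage, and the pod IPs are whatever this particular run happened to assign:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialResponse matches the JSON shape implied by the /dial curls in the
// log (an assumption about the test image, not a documented contract).
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Same request the suite execs via /bin/sh -c curl inside
	// host-test-container-pod.
	url := "http://10.244.1.201:8080/dial?request=hostName&protocol=http&host=10.244.1.200&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		panic(err)
	}
	// One hostname per successful try; the spec passes once every target
	// pod has answered at least once.
	fmt.Println(dr.Responses)
}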
May 15 13:04:50.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:04:50.337: INFO: namespace pod-network-test-565 deletion completed in 12.091187766s

• [SLOW TEST:40.587 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:04:50.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 15 13:04:50.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:50.481: INFO: Number of nodes with available pods: 0
May 15 13:04:50.481: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:04:51.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:51.488: INFO: Number of nodes with available pods: 0
May 15 13:04:51.488: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:04:52.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:52.488: INFO: Number of nodes with available pods: 0
May 15 13:04:52.488: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:04:53.486: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:53.490: INFO: Number of nodes with available pods: 0
May 15 13:04:53.490: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:04:54.533: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:54.537: INFO: Number of nodes with available pods: 0
May 15 13:04:54.537: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:04:55.486: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:55.490: INFO: Number of nodes with available pods: 2
May 15 13:04:55.490: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 15 13:04:55.507: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:55.509: INFO: Number of nodes with available pods: 1
May 15 13:04:55.509: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:04:56.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:56.519: INFO: Number of nodes with available pods: 1
May 15 13:04:56.519: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:04:57.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:57.518: INFO: Number of nodes with available pods: 1
May 15 13:04:57.518: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:04:58.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:58.519: INFO: Number of nodes with available pods: 1
May 15 13:04:58.519: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:04:59.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:04:59.518: INFO: Number of nodes with available pods: 1
May 15 13:04:59.518: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:05:00.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:05:00.517: INFO: Number of nodes with available pods: 1
May 15 13:05:00.517: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:05:01.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:05:01.517: INFO: Number of nodes with available pods: 1
May 15 13:05:01.517: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:05:02.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:05:02.517: INFO: Number of nodes with available pods: 1
May 15 13:05:02.518: INFO: Node iruya-worker2 is running more than one daemon pod
May 15 13:05:03.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:05:03.518: INFO: Number of nodes with available pods: 2
May 15 13:05:03.518: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3467, will wait for the garbage collector to delete the pods
May 15 13:05:03.579: INFO: Deleting DaemonSet.extensions daemon-set took: 7.028512ms
May 15 13:05:03.880: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272444ms
May 15 13:05:11.915: INFO: Number of nodes with available pods: 0
May 15 13:05:11.915: INFO: Number of running nodes: 0, number of available pods: 0
May 15 13:05:11.922: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3467/daemonsets","resourceVersion":"11033821"},"items":null}
May 15 13:05:11.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3467/pods","resourceVersion":"11033821"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:05:11.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3467" for this suite.
May 15 13:05:17.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:05:18.030: INFO: namespace daemonsets-3467 deletion completed in 6.092605151s

• [SLOW TEST:27.692 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:05:18.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 15 13:05:22.629: INFO: Successfully updated pod "annotationupdate0bc735b5-1e32-4303-b16d-5a63e5b0b0ea"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:05:26.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4" for this suite.
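The downwardAPI spec above relies on a projected volume: metadata.annotations is materialized as a file, and the kubelet rewrites that file after the annotation change that the "Successfully updated pod" line records. A sketch of such a pod spec in Go; the names, image, and polling loop are illustrative only, not the suite's fixtures:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod builds a pod whose projected downward-API volume exposes
// metadata.annotations as a file the container can watch for updates.
func annotationPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									// The file tracks the live annotations.
									Path:     "annotations",
									FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(annotationPod().Name) }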
May 15 13:05:50.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:05:50.750: INFO: namespace projected-4 deletion completed in 24.089206752s

• [SLOW TEST:32.720 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:05:50.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-csbn
STEP: Creating a pod to test atomic-volume-subpath
May 15 13:05:50.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-csbn" in namespace "subpath-2010" to be "success or failure"
May 15 13:05:50.837: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143949ms
May 15 13:05:52.841: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008241557s
May 15 13:05:54.844: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 4.011226707s
May 15 13:05:56.847: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 6.013699599s
May 15 13:05:58.851: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 8.018295408s
May 15 13:06:00.856: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 10.022842737s
May 15 13:06:02.859: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 12.026262981s
May 15 13:06:04.863: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 14.030541379s
May 15 13:06:06.866: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 16.03322974s
May 15 13:06:08.869: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 18.036532868s
May 15 13:06:10.874: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 20.040839409s
May 15 13:06:12.887: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 22.053985652s
May 15 13:06:14.890: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Running", Reason="", readiness=true. Elapsed: 24.057148248s
May 15 13:06:16.895: INFO: Pod "pod-subpath-test-downwardapi-csbn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061687172s
STEP: Saw pod success
May 15 13:06:16.895: INFO: Pod "pod-subpath-test-downwardapi-csbn" satisfied condition "success or failure"
May 15 13:06:16.898: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-csbn container test-container-subpath-downwardapi-csbn:
STEP: delete the pod
May 15 13:06:16.953: INFO: Waiting for pod pod-subpath-test-downwardapi-csbn to disappear
May 15 13:06:16.968: INFO: Pod pod-subpath-test-downwardapi-csbn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-csbn
May 15 13:06:16.968: INFO: Deleting pod "pod-subpath-test-downwardapi-csbn" in namespace "subpath-2010"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:06:16.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2010" for this suite.
May 15 13:06:23.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:06:23.086: INFO: namespace subpath-2010 deletion completed in 6.112528087s

• [SLOW TEST:32.335 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:06:23.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 13:06:23.252: INFO: Create a RollingUpdate DaemonSet
May 15 13:06:23.255: INFO: Check that daemon pods launch on every node of the cluster
May 15 13:06:23.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:23.278: INFO: Number of nodes with available pods: 0
May 15 13:06:23.278: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:06:24.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:24.286: INFO: Number of nodes with available pods: 0
May 15 13:06:24.286: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:06:25.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:25.286: INFO: Number of nodes with available pods: 0
May 15 13:06:25.286: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:06:26.284: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:26.287: INFO: Number of nodes with available pods: 0
May 15 13:06:26.287: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:06:27.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:27.287: INFO: Number of nodes with available pods: 0
May 15 13:06:27.287: INFO: Node iruya-worker is running more than one daemon pod
May 15 13:06:28.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:28.285: INFO: Number of nodes with available pods: 2
May 15 13:06:28.285: INFO: Number of running nodes: 2, number of available pods: 2
May 15 13:06:28.285: INFO: Update the DaemonSet to trigger a rollout
May 15 13:06:28.291: INFO: Updating DaemonSet daemon-set
May 15 13:06:33.307: INFO: Roll back the DaemonSet before rollout is complete
May 15 13:06:33.312: INFO: Updating DaemonSet daemon-set
May 15 13:06:33.312: INFO: Make sure DaemonSet rollback is complete
May 15 13:06:33.330: INFO: Wrong image for pod: daemon-set-l5n9m. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 15 13:06:33.330: INFO: Pod daemon-set-l5n9m is not available
May 15 13:06:33.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:34.351: INFO: Wrong image for pod: daemon-set-l5n9m. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 15 13:06:34.351: INFO: Pod daemon-set-l5n9m is not available
May 15 13:06:34.355: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 13:06:35.352: INFO: Pod daemon-set-qtxpt is not available
May 15 13:06:35.355: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7336, will wait for the garbage collector to delete the pods
May 15 13:06:35.419: INFO: Deleting DaemonSet.extensions daemon-set took: 5.984613ms
May 15 13:06:35.721: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.148294ms
May 15 13:06:39.527: INFO: Number of nodes with available pods: 0
May 15 13:06:39.527: INFO: Number of running nodes: 0, number of available pods: 0
May 15 13:06:39.528: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7336/daemonsets","resourceVersion":"11034142"},"items":null}
May 15 13:06:39.530: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7336/pods","resourceVersion":"11034142"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:06:39.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7336" for this suite.
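The rollback flow recorded above: update to an unpullable image, restore the previous template before the rollout finishes, and verify that only the broken pod (daemon-set-l5n9m, replaced by daemon-set-qtxpt) was restarted. A sketch of the same two updates via client-go; the conformance test drives this through its own helpers, so treat this as an approximation under the pre-context API:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "default" // this run used the generated namespace daemonsets-7336

	// Trigger a rollout with an unpullable image, as the spec does.
	ds, err := cs.AppsV1().DaemonSets(ns).Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		panic(err)
	}

	// Roll back mid-rollout by restoring the known-good image. With the
	// RollingUpdate strategy only the broken pod is replaced; pods that
	// never left the old image keep running, i.e. no unnecessary restarts.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
	if _, err = cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		panic(err)
	}
	fmt.Println("rollback submitted")
}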
May 15 13:06:45.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:06:45.607: INFO: namespace daemonsets-7336 deletion completed in 6.066261165s

• [SLOW TEST:22.520 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:06:45.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-61a43aa5-7fc1-4c5e-82c3-234e26cdbeb0
STEP: Creating configMap with name cm-test-opt-upd-3ca6cda7-d36b-42c8-9e9d-50bd861a0d80
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-61a43aa5-7fc1-4c5e-82c3-234e26cdbeb0
STEP: Updating configmap cm-test-opt-upd-3ca6cda7-d36b-42c8-9e9d-50bd861a0d80
STEP: Creating configMap with name cm-test-opt-create-dbe47254-235f-4dfb-aee3-89a93bbf3494
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:06:55.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4540" for this suite.
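"Optional" here is the ConfigMapProjection.Optional flag: the projected volume stays valid while one referenced configmap is deleted and another is created mid-test, and the files track those changes. A sketch of the volume shape, with the configmap names shortened from the generated ones above:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// optionalConfigMapVolume builds the kind of projected volume this spec
// mounts: configMap sources marked optional, so a missing configmap does
// not invalidate the volume while the pod is running.
func optionalConfigMapVolume() v1.Volume {
	optional := true
	return v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{
					{ConfigMap: &v1.ConfigMapProjection{
						// Deleted during the test; Optional keeps the mount alive.
						LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					{ConfigMap: &v1.ConfigMapProjection{
						// Updated during the test; the projected file follows.
						LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(optionalConfigMapVolume().Name) }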
May 15 13:07:19.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:07:19.931: INFO: namespace projected-4540 deletion completed in 24.079800411s

• [SLOW TEST:34.324 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:07:19.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:07:24.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6683" for this suite.
May 15 13:07:30.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:07:30.345: INFO: namespace emptydir-wrapper-6683 deletion completed in 6.115679532s

• [SLOW TEST:10.414 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:07:30.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 15 13:07:34.993: INFO: Successfully updated pod "pod-update-669a0879-a581-4313-9062-492c09205e43"
STEP: verifying the updated pod is in kubernetes
May 15 13:07:35.015: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:07:35.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-559" for this suite.
May 15 13:07:57.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:07:57.104: INFO: namespace pods-559 deletion completed in 22.086027461s

• [SLOW TEST:26.759 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:07:57.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 13:07:57.600: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 15 13:08:02.606: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 15 13:08:02.606: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 15 13:08:04.610: INFO: Creating deployment "test-rollover-deployment"
May 15 13:08:04.618: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 15 13:08:06.624: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 15 13:08:06.631: INFO: Ensure that both replica sets have 1 created replica
May 15 13:08:06.638: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 15 13:08:06.673: INFO: Updating deployment test-rollover-deployment
May 15 13:08:06.673: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 15 13:08:08.697: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 15 13:08:08.704: INFO: Make sure deployment "test-rollover-deployment" is complete
May 15 13:08:08.710: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:08.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144886, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:10.716: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:10.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144890, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:12.754: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:12.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144890, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:14.719: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:14.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144890, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:16.718: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:16.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144890, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:18.718: INFO: all replica sets need to contain the pod-template-hash label
May 15 13:08:18.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144890, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:20.729: INFO:
May 15 13:08:20.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144900, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725144884, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:08:22.717: INFO:
May 15 13:08:22.717: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 15 13:08:22.724: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-141,SelfLink:/apis/apps/v1/namespaces/deployment-141/deployments/test-rollover-deployment,UID:9e199d04-6ad5-4830-9197-e327807bd9b0,ResourceVersion:11034541,Generation:2,CreationTimestamp:2020-05-15 13:08:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-15 13:08:04 +0000 UTC 2020-05-15 13:08:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-15 13:08:20 +0000 UTC 2020-05-15 13:08:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
May 15 13:08:22.726: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-141,SelfLink:/apis/apps/v1/namespaces/deployment-141/replicasets/test-rollover-deployment-854595fc44,UID:a4f6bec1-1a55-4daf-9850-c927528be510,ResourceVersion:11034530,Generation:2,CreationTimestamp:2020-05-15 13:08:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9e199d04-6ad5-4830-9197-e327807bd9b0 0xc001a41067 0xc001a41068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 15 13:08:22.726: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 15 13:08:22.726: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-141,SelfLink:/apis/apps/v1/namespaces/deployment-141/replicasets/test-rollover-controller,UID:ae89bded-d28d-40e6-afb2-4de16565aa41,ResourceVersion:11034540,Generation:2,CreationTimestamp:2020-05-15 13:07:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9e199d04-6ad5-4830-9197-e327807bd9b0 0xc001a40f97 0xc001a40f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 15 13:08:22.726: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-141,SelfLink:/apis/apps/v1/namespaces/deployment-141/replicasets/test-rollover-deployment-9b8b997cf,UID:f6cf62a8-58a8-4cee-8d2a-b366c1580559,ResourceVersion:11034493,Generation:2,CreationTimestamp:2020-05-15 13:08:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9e199d04-6ad5-4830-9197-e327807bd9b0 0xc001a41130 0xc001a41131}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 15 13:08:22.729: INFO: Pod "test-rollover-deployment-854595fc44-nbzwx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-nbzwx,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-141,SelfLink:/api/v1/namespaces/deployment-141/pods/test-rollover-deployment-854595fc44-nbzwx,UID:a94df1ab-4bdd-4c28-9959-d16f22db882e,ResourceVersion:11034508,Generation:0,CreationTimestamp:2020-05-15 13:08:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 a4f6bec1-1a55-4daf-9850-c927528be510 0xc001a41d07 0xc001a41d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2kvvq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2kvvq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2kvvq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a41d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a41da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:08:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:08:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00
+0000 UTC 2020-05-15 13:08:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:08:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.209,StartTime:2020-05-15 13:08:06 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-15 13:08:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3616761d4653c220c2b4a2d2f0511ae9f74b87c4fef294dcf899d1a85bd39b7d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:08:22.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-141" for this suite. May 15 13:08:30.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:08:30.825: INFO: namespace deployment-141 deletion completed in 8.093388823s • [SLOW TEST:33.720 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:08:30.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 13:08:30.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2170' May 15 13:08:31.011: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 13:08:31.011: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 15 13:08:31.050: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j65tp] May 15 13:08:31.050: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j65tp" in namespace "kubectl-2170" to be "running and ready" May 15 13:08:31.067: INFO: Pod "e2e-test-nginx-rc-j65tp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.573151ms May 15 13:08:33.126: INFO: Pod "e2e-test-nginx-rc-j65tp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075361216s May 15 13:08:35.129: INFO: Pod "e2e-test-nginx-rc-j65tp": Phase="Running", Reason="", readiness=true. Elapsed: 4.07906662s May 15 13:08:35.129: INFO: Pod "e2e-test-nginx-rc-j65tp" satisfied condition "running and ready" May 15 13:08:35.129: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-j65tp] May 15 13:08:35.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2170' May 15 13:08:35.239: INFO: stderr: "" May 15 13:08:35.239: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 15 13:08:35.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2170' May 15 13:08:35.390: INFO: stderr: "" May 15 13:08:35.390: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:08:35.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2170" for this suite. May 15 13:08:57.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:08:57.492: INFO: namespace kubectl-2170 deletion completed in 22.094847454s • [SLOW TEST:26.667 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:08:57.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 15 13:08:57.544: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:09:05.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7088" for this suite. 
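For reference, the pod shape the InitContainer RestartNever test above exercises can be sketched as follows; this is a minimal illustration, and the pod name, images, and commands are assumptions rather than values taken from the suite:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # hypothetical name
spec:
  restartPolicy: Never          # the RestartNever case from the test above
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ['sh', '-c', 'echo init ran']
  containers:
  - name: main
    image: busybox:1.29
    command: ['sh', '-c', 'echo main ran']
EOF

The kubelet runs init containers to completion, in order, before any regular container starts; with restartPolicy Never a failed init container is not retried and the pod goes to Failed.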
May 15 13:09:11.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:09:11.856: INFO: namespace init-container-7088 deletion completed in 6.082089059s • [SLOW TEST:14.363 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:09:11.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8d9b5dcb-8a84-4660-91e2-d6ea3ed6f2ae STEP: Creating a pod to test consume secrets May 15 13:09:12.011: INFO: Waiting up to 5m0s for pod "pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b" in namespace "secrets-1540" to be "success or failure" May 15 13:09:12.014: INFO: Pod "pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.215049ms May 15 13:09:14.018: INFO: Pod "pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00693094s May 15 13:09:16.023: INFO: Pod "pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011563588s STEP: Saw pod success May 15 13:09:16.023: INFO: Pod "pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b" satisfied condition "success or failure" May 15 13:09:16.026: INFO: Trying to get logs from node iruya-worker pod pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b container secret-volume-test: STEP: delete the pod May 15 13:09:16.111: INFO: Waiting for pod pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b to disappear May 15 13:09:16.117: INFO: Pod pod-secrets-346853f4-ef66-4761-a70f-1dc6bdc2e99b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:09:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1540" for this suite. May 15 13:09:22.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:09:22.337: INFO: namespace secrets-1540 deletion completed in 6.216078338s STEP: Destroying namespace "secret-namespace-3303" for this suite. 
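The Secrets test above checks that a volume mount resolves the secret in the pod's own namespace even when another namespace holds a secret of the same name. A minimal sketch of that setup, with illustrative namespace, secret, and pod names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic demo-secret --from-literal=data-1=value-a -n ns-a
kubectl create secret generic demo-secret --from-literal=data-1=value-b -n ns-b
kubectl apply -n ns-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/secret-volume/data-1']   # prints value-a, not value-b
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret   # resolved in the pod's own namespace (ns-a)
EOF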
May 15 13:09:28.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:09:28.421: INFO: namespace secret-namespace-3303 deletion completed in 6.084852371s • [SLOW TEST:16.565 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:09:28.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 13:09:28.515: INFO: Waiting up to 5m0s for pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612" in namespace "emptydir-1364" to be "success or failure" May 15 13:09:28.536: INFO: Pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612": Phase="Pending", Reason="", readiness=false. Elapsed: 21.58486ms May 15 13:09:30.680: INFO: Pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165069447s May 15 13:09:32.697: INFO: Pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612": Phase="Running", Reason="", readiness=true. Elapsed: 4.182626817s May 15 13:09:34.702: INFO: Pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187307411s STEP: Saw pod success May 15 13:09:34.702: INFO: Pod "pod-b5b6c251-f881-41c4-899f-fdb475f39612" satisfied condition "success or failure" May 15 13:09:34.705: INFO: Trying to get logs from node iruya-worker2 pod pod-b5b6c251-f881-41c4-899f-fdb475f39612 container test-container: STEP: delete the pod May 15 13:09:34.727: INFO: Waiting for pod pod-b5b6c251-f881-41c4-899f-fdb475f39612 to disappear May 15 13:09:34.753: INFO: Pod pod-b5b6c251-f881-41c4-899f-fdb475f39612 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:09:34.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1364" for this suite. 
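The (root,0777,tmpfs) case above corresponds to an emptyDir volume with medium Memory, which the kubelet backs with tmpfs. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ['sh', '-c', 'mount | grep " /cache " && ls -ld /cache']
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # tmpfs-backed emptyDir
EOF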
May 15 13:09:40.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:09:40.864: INFO: namespace emptydir-1364 deletion completed in 6.088411611s • [SLOW TEST:12.442 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:09:40.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 13:09:40.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7113' May 15 13:09:41.051: INFO: stderr: "" May 15 13:09:41.051: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 15 13:09:46.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7113 -o json' May 15 13:09:46.200: INFO: stderr: "" May 15 13:09:46.200: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-15T13:09:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7113\",\n \"resourceVersion\": \"11034882\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7113/pods/e2e-test-nginx-pod\",\n \"uid\": \"b7e8e00b-241f-496b-81be-84db2d674594\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4zw8n\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": 
\"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-4zw8n\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4zw8n\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T13:09:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T13:09:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T13:09:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T13:09:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f347fc4b6ff34671aea1cfb46419afd3b28f8f7efc6d2ccb606ecc323e761720\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-15T13:09:43Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.38\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-15T13:09:41Z\"\n }\n}\n" STEP: replace the image in the pod May 15 13:09:46.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7113' May 15 13:09:46.533: INFO: stderr: "" May 15 13:09:46.533: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 15 13:09:46.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7113' May 15 13:09:50.883: INFO: stderr: "" May 15 13:09:50.883: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:09:50.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7113" for this suite. 
May 15 13:09:56.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:09:56.968: INFO: namespace kubectl-7113 deletion completed in 6.082178397s • [SLOW TEST:16.102 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:09:56.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 13:10:05.112: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:05.136: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:07.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:07.141: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:09.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:09.149: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:11.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:11.140: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:13.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:13.140: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:15.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:15.140: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:17.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:17.140: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:19.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:19.151: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:21.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:21.141: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:23.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:23.146: INFO: Pod pod-with-prestop-exec-hook still exists May 15 13:10:25.136: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 13:10:25.140: INFO: Pod pod-with-prestop-exec-hook no longer 
exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:10:25.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8414" for this suite. May 15 13:10:47.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:10:47.240: INFO: namespace container-lifecycle-hook-8414 deletion completed in 22.091407487s • [SLOW TEST:50.271 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:10:47.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-421 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-421 to expose endpoints map[] May 15 13:10:47.424: INFO: Get endpoints failed (15.250893ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 15 13:10:48.427: INFO: successfully validated that service multi-endpoint-test in namespace services-421 exposes endpoints map[] (1.018847472s elapsed) STEP: Creating pod pod1 in namespace services-421 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-421 to expose endpoints map[pod1:[100]] May 15 13:10:52.490: INFO: successfully validated that service multi-endpoint-test in namespace services-421 exposes endpoints map[pod1:[100]] (4.05557796s elapsed) STEP: Creating pod pod2 in namespace services-421 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-421 to expose endpoints map[pod1:[100] pod2:[101]] May 15 13:10:56.783: INFO: successfully validated that service multi-endpoint-test in namespace services-421 exposes endpoints map[pod1:[100] pod2:[101]] (4.28938338s elapsed) STEP: Deleting pod pod1 in namespace services-421 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-421 to expose endpoints map[pod2:[101]] May 15 13:10:57.862: INFO: successfully validated that service multi-endpoint-test in namespace services-421 exposes endpoints map[pod2:[101]] (1.074917813s elapsed) STEP: Deleting pod pod2 in namespace services-421 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-421 to 
expose endpoints map[] May 15 13:10:58.932: INFO: successfully validated that service multi-endpoint-test in namespace services-421 exposes endpoints map[] (1.064955836s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:10:58.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-421" for this suite. May 15 13:11:20.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:11:21.075: INFO: namespace services-421 deletion completed in 22.116088534s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.835 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:11:21.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:11:52.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7800" for this suite. 
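The terminate-cmd containers above exit with controlled status codes so the suite can assert on RestartCount, Phase, Ready, and State. One such probe can be approximated as follows (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox:1.29
    command: ['sh', '-c', 'exit 1']   # non-zero exit triggers a restart under OnFailure
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'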
May 15 13:11:58.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:11:58.675: INFO: namespace container-runtime-7800 deletion completed in 6.394118225s • [SLOW TEST:37.600 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:11:58.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:11:58.774: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 15 13:12:03.779: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 13:12:03.779: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 15 13:12:03.828: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1907,SelfLink:/apis/apps/v1/namespaces/deployment-1907/deployments/test-cleanup-deployment,UID:6c6a2f6c-36ef-423d-ac4e-36c145d76d02,ResourceVersion:11035362,Generation:1,CreationTimestamp:2020-05-15 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 15 13:12:03.834: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1907,SelfLink:/apis/apps/v1/namespaces/deployment-1907/replicasets/test-cleanup-deployment-55bbcbc84c,UID:ec57608d-8482-44a6-a836-a65ae52454d7,ResourceVersion:11035364,Generation:1,CreationTimestamp:2020-05-15 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6c6a2f6c-36ef-423d-ac4e-36c145d76d02 0xc003193737 0xc003193738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 15 13:12:03.834: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 15 13:12:03.834: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-1907,SelfLink:/apis/apps/v1/namespaces/deployment-1907/replicasets/test-cleanup-controller,UID:95ce01c5-05d6-48dd-a13b-006c52491c4c,ResourceVersion:11035363,Generation:1,CreationTimestamp:2020-05-15 13:11:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6c6a2f6c-36ef-423d-ac4e-36c145d76d02 0xc003193667 0xc003193668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 15 13:12:03.843: INFO: Pod "test-cleanup-controller-vfckh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vfckh,GenerateName:test-cleanup-controller-,Namespace:deployment-1907,SelfLink:/api/v1/namespaces/deployment-1907/pods/test-cleanup-controller-vfckh,UID:c789bbca-0c7a-4878-b510-eba35195dd74,ResourceVersion:11035357,Generation:0,CreationTimestamp:2020-05-15 13:11:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 95ce01c5-05d6-48dd-a13b-006c52491c4c 0xc00296f907 0xc00296f908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bdpzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bdpzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bdpzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00296f980} {node.kubernetes.io/unreachable Exists NoExecute 0xc00296f9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:11:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:12:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:12:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:11:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.43,StartTime:2020-05-15 13:11:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 13:12:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f15c8d07b0ae1f492950d25c7b9550d9187647b89ecf9ebc8d80259f12714ccb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 13:12:03.843: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vl2r9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vl2r9,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1907,SelfLink:/api/v1/namespaces/deployment-1907/pods/test-cleanup-deployment-55bbcbc84c-vl2r9,UID:20ed15f8-6afe-436e-8330-ccd74029c934,ResourceVersion:11035370,Generation:0,CreationTimestamp:2020-05-15 13:12:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c ec57608d-8482-44a6-a836-a65ae52454d7 0xc00296fa87 0xc00296fa88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bdpzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bdpzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bdpzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00296fb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00296fb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:12:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:12:03.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1907" for this suite. 
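History cleanup in this test is driven by the deployment's RevisionHistoryLimit, which the dump above shows set to *0, so superseded ReplicaSets are deleted as soon as they are scaled down. Expressed as a manifest (metadata is illustrative; the image, labels, and replica count follow the dump):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0     # keep no old ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF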
May 15 13:12:09.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:12:10.086: INFO: namespace deployment-1907 deletion completed in 6.204574012s • [SLOW TEST:11.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:12:10.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 13:12:10.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-67' May 15 13:12:10.311: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 13:12:10.311: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 15 13:12:12.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-67' May 15 13:12:12.473: INFO: stderr: "" May 15 13:12:12.473: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:12:12.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-67" for this suite. 
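As the deprecation warning above says, the generator-based form is superseded by kubectl create; the non-deprecated equivalent of the command in this test would be roughly:

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=kubectl-67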
May 15 13:13:34.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:13:34.553: INFO: namespace kubectl-67 deletion completed in 1m22.076584581s • [SLOW TEST:84.467 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:13:34.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 15 13:13:39.287: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9371 pod-service-account-d3ccca87-9528-4ced-b39c-aa99b69dbfd9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 15 13:13:42.214: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9371 pod-service-account-d3ccca87-9528-4ced-b39c-aa99b69dbfd9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 15 13:13:42.431: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9371 pod-service-account-d3ccca87-9528-4ced-b39c-aa99b69dbfd9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:13:42.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9371" for this suite. 
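The three exec probes above read the standard files of the mounted service-account volume; the same check works against any pod with token automounting enabled, here using the pod and namespace names from the log:

for f in token ca.crt namespace; do
  kubectl exec --namespace=svcaccounts-9371 \
    pod-service-account-d3ccca87-9528-4ced-b39c-aa99b69dbfd9 -c=test -- \
    cat /var/run/secrets/kubernetes.io/serviceaccount/$f
done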
May 15 13:13:48.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:13:48.736: INFO: namespace svcaccounts-9371 deletion completed in 6.093587962s • [SLOW TEST:14.183 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:13:48.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 15 13:13:56.850: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:13:56.858: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:13:58.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:13:58.862: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:00.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:00.863: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:02.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:02.862: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:04.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:04.862: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:06.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:06.862: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:08.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:08.863: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:10.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:10.861: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:12.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:12.863: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:14.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:14.862: INFO: Pod pod-with-poststart-exec-hook still exists May 15 13:14:16.858: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 13:14:16.863: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:14:16.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6456" for this suite. May 15 13:14:38.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:14:38.960: INFO: namespace container-lifecycle-hook-6456 deletion completed in 22.092922003s • [SLOW TEST:50.223 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:14:38.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-8659f914-6a9a-4aa0-aefd-7146ab754ef1 STEP: Creating a pod to test consume configMaps May 15 13:14:39.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4" in namespace "projected-8282" to be "success or failure" May 15 13:14:39.054: INFO: Pod "pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.953891ms May 15 13:14:41.059: INFO: Pod "pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041607209s May 15 13:14:43.064: INFO: Pod "pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046429461s STEP: Saw pod success May 15 13:14:43.064: INFO: Pod "pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4" satisfied condition "success or failure" May 15 13:14:43.068: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4 container projected-configmap-volume-test: STEP: delete the pod May 15 13:14:43.113: INFO: Waiting for pod pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4 to disappear May 15 13:14:43.135: INFO: Pod pod-projected-configmaps-f6652894-162d-4086-9763-06605c9d78a4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:14:43.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8282" for this suite. 
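
Annotation: "with mappings" refers to the items field of the projected ConfigMap source: instead of one file per key at the volume root, each listed key is written to an explicit relative path. A hedged sketch of that volume shape using the Kubernetes API types; the ConfigMap name, key, and path here are illustrative, not the generated ones from this run:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // A projected volume that maps ConfigMap key "data-1" to the file
        // "path/to/data-1" inside the mount (names are illustrative).
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                        },
                    }},
                },
            },
        }
        out, _ := yaml.Marshal(vol)
        fmt.Print(string(out))
    }
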
May 15 13:14:49.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:14:49.233: INFO: namespace projected-8282 deletion completed in 6.092370031s • [SLOW TEST:10.274 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:14:49.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 15 13:14:49.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4241' May 15 13:14:49.574: INFO: stderr: "" May 15 13:14:49.574: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 15 13:14:50.579: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:50.579: INFO: Found 0 / 1 May 15 13:14:51.641: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:51.641: INFO: Found 0 / 1 May 15 13:14:52.579: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:52.579: INFO: Found 0 / 1 May 15 13:14:53.578: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:53.578: INFO: Found 1 / 1 May 15 13:14:53.578: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 15 13:14:53.582: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:53.582: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 13:14:53.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-mq5qm --namespace=kubectl-4241 -p {"metadata":{"annotations":{"x":"y"}}}' May 15 13:14:53.705: INFO: stderr: "" May 15 13:14:53.705: INFO: stdout: "pod/redis-master-mq5qm patched\n" STEP: checking annotations May 15 13:14:53.710: INFO: Selector matched 1 pods for map[app:redis] May 15 13:14:53.710: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:14:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4241" for this suite. 
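
Annotation: the -p payload above is a strategic-merge patch body. For reference, the same annotation patch issued through client-go, sketched with the pre-context method signatures that match this run's v1.15 vintage (newer client-go adds a context.Context and PatchOptions argument); the pod and namespace names are taken from the log:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same body kubectl sends: merge {"x":"y"} into metadata.annotations.
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        pod, err := cs.CoreV1().Pods("kubectl-4241").
            Patch("redis-master-mq5qm", types.StrategicMergePatchType, patch)
        if err != nil {
            panic(err)
        }
        fmt.Println(pod.Annotations["x"]) // prints "y"
    }
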
May 15 13:15:15.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:15:15.807: INFO: namespace kubectl-4241 deletion completed in 22.094851873s • [SLOW TEST:26.573 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:15:15.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 15 13:15:21.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-483154cf-f6b8-489d-9e37-9c157e673963 -c busybox-main-container --namespace=emptydir-9467 -- cat /usr/share/volumeshare/shareddata.txt' May 15 13:15:22.167: INFO: stderr: "I0515 13:15:22.069612 362 log.go:172] (0xc000990420) (0xc00066a960) Create stream\nI0515 13:15:22.069663 362 log.go:172] (0xc000990420) (0xc00066a960) Stream added, broadcasting: 1\nI0515 13:15:22.071496 362 log.go:172] (0xc000990420) Reply frame received for 1\nI0515 13:15:22.071528 362 log.go:172] (0xc000990420) (0xc000958000) Create stream\nI0515 13:15:22.071538 362 log.go:172] (0xc000990420) (0xc000958000) Stream added, broadcasting: 3\nI0515 13:15:22.072369 362 log.go:172] (0xc000990420) Reply frame received for 3\nI0515 13:15:22.072393 362 log.go:172] (0xc000990420) (0xc00066aa00) Create stream\nI0515 13:15:22.072401 362 log.go:172] (0xc000990420) (0xc00066aa00) Stream added, broadcasting: 5\nI0515 13:15:22.073088 362 log.go:172] (0xc000990420) Reply frame received for 5\nI0515 13:15:22.162424 362 log.go:172] (0xc000990420) Data frame received for 5\nI0515 13:15:22.162453 362 log.go:172] (0xc00066aa00) (5) Data frame handling\nI0515 13:15:22.162472 362 log.go:172] (0xc000990420) Data frame received for 3\nI0515 13:15:22.162484 362 log.go:172] (0xc000958000) (3) Data frame handling\nI0515 13:15:22.162494 362 log.go:172] (0xc000958000) (3) Data frame sent\nI0515 13:15:22.162498 362 log.go:172] (0xc000990420) Data frame received for 3\nI0515 13:15:22.162502 362 log.go:172] (0xc000958000) (3) Data frame handling\nI0515 13:15:22.163279 362 log.go:172] (0xc000990420) Data frame received for 1\nI0515 13:15:22.163298 362 log.go:172] (0xc00066a960) (1) Data frame handling\nI0515 13:15:22.163308 362 log.go:172] (0xc00066a960) (1) Data frame sent\nI0515 13:15:22.163317 362 log.go:172] (0xc000990420) (0xc00066a960) Stream removed, broadcasting: 
1\nI0515 13:15:22.163326 362 log.go:172] (0xc000990420) Go away received\nI0515 13:15:22.163596 362 log.go:172] (0xc000990420) (0xc00066a960) Stream removed, broadcasting: 1\nI0515 13:15:22.163607 362 log.go:172] (0xc000990420) (0xc000958000) Stream removed, broadcasting: 3\nI0515 13:15:22.163611 362 log.go:172] (0xc000990420) (0xc00066aa00) Stream removed, broadcasting: 5\n" May 15 13:15:22.167: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:15:22.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9467" for this suite. May 15 13:15:28.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:15:28.285: INFO: namespace emptydir-9467 deletion completed in 6.114820921s • [SLOW TEST:12.478 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:15:28.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9956 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 15 13:15:28.375: INFO: Found 0 stateful pods, waiting for 3 May 15 13:15:38.391: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 13:15:38.391: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 13:15:38.391: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 15 13:15:48.379: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 13:15:48.379: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 13:15:48.379: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 15 13:15:48.403: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the 
partition is greater than the number of replicas STEP: Performing a canary update May 15 13:15:58.526: INFO: Updating stateful set ss2 May 15 13:15:58.591: INFO: Waiting for Pod statefulset-9956/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 15 13:16:08.600: INFO: Waiting for Pod statefulset-9956/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 15 13:16:18.760: INFO: Found 2 stateful pods, waiting for 3 May 15 13:16:28.764: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 13:16:28.764: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 13:16:28.764: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 15 13:16:28.788: INFO: Updating stateful set ss2 May 15 13:16:28.803: INFO: Waiting for Pod statefulset-9956/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 15 13:16:38.810: INFO: Waiting for Pod statefulset-9956/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 15 13:16:48.825: INFO: Updating stateful set ss2 May 15 13:16:48.990: INFO: Waiting for StatefulSet statefulset-9956/ss2 to complete update May 15 13:16:48.990: INFO: Waiting for Pod statefulset-9956/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 15 13:16:58.999: INFO: Waiting for StatefulSet statefulset-9956/ss2 to complete update May 15 13:16:58.999: INFO: Waiting for Pod statefulset-9956/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 15 13:17:08.999: INFO: Deleting all statefulset in ns statefulset-9956 May 15 13:17:09.003: INFO: Scaling statefulset ss2 to 0 May 15 13:17:39.023: INFO: Waiting for statefulset status.replicas updated to 0 May 15 13:17:39.025: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:17:39.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9956" for this suite. 
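
Annotation: the canary and phased roll above are both driven by spec.updateStrategy.rollingUpdate.partition: pods with an ordinal greater than or equal to the partition move to the new revision, lower ordinals stay on the old one until the partition is lowered. Setting it above the replica count, the "not applying an update" step, therefore updates nothing. A sketch of just that field with the apps/v1 types; the value is illustrative:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        partition := int32(2) // with 3 replicas, only ordinal 2 (ss2-2) gets the new revision
        strategy := appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: &partition,
            },
        }
        out, _ := yaml.Marshal(strategy)
        fmt.Print(string(out))
    }

Lowering the partition step by step (2, then 1, then 0) produces exactly the phased roll the log records for ss2-2, ss2-1, and finally ss2-0.
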
May 15 13:17:47.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:17:47.126: INFO: namespace statefulset-9956 deletion completed in 8.082671789s • [SLOW TEST:138.840 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:17:47.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 15 13:17:47.195: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:17:47.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7078" for this suite. 
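
Annotation: passing -p 0 (i.e. --port 0) tells the proxy to bind an OS-assigned ephemeral port instead of the default 8001, and the test scrapes the announced port from the proxy's output before curling /api/ through it. The port-0 behavior is ordinary TCP listener semantics, shown here in isolation:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 means "pick any free port"; the OS fills in the real one,
        // readable from the listener's resolved address.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        fmt.Println("listening on port", ln.Addr().(*net.TCPAddr).Port)
    }
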
May 15 13:17:53.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:17:53.361: INFO: namespace kubectl-7078 deletion completed in 6.067598845s • [SLOW TEST:6.235 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:17:53.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0515 13:18:34.683505 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:18:34.683: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:18:34.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9338" for this suite. 
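
Annotation: "if delete options say so" means the replication controller is deleted with PropagationPolicy set to Orphan, so the garbage collector removes the controller but must leave its pods running, which is what the 30-second watch above verifies. A hedged sketch of that delete with pre-context client-go signatures; the namespace is from the log, the RC name is illustrative:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Orphan: the garbage collector must NOT cascade to the RC's pods.
        orphan := metav1.DeletePropagationOrphan
        err = cs.CoreV1().ReplicationControllers("gc-9338").
            Delete("example-rc", &metav1.DeleteOptions{PropagationPolicy: &orphan})
        if err != nil {
            panic(err)
        }
    }
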
May 15 13:18:44.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:18:44.804: INFO: namespace gc-9338 deletion completed in 10.117376763s • [SLOW TEST:51.443 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:18:44.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 15 13:18:44.872: INFO: PodSpec: initContainers in spec.initContainers May 15 13:19:44.263: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7e3bbc52-9f53-4b4b-8170-2e2b4d7e8cf2", GenerateName:"", Namespace:"init-container-8703", SelfLink:"/api/v1/namespaces/init-container-8703/pods/pod-init-7e3bbc52-9f53-4b4b-8170-2e2b4d7e8cf2", UID:"ca6bcdee-70ce-4e2f-a3e7-50c0f1f5854f", ResourceVersion:"11036981", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725145524, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"872551512"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wn77x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0030e0300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wn77x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wn77x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wn77x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f6cfc8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002914000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f6d050)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f6d070)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f6d078), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f6d07c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725145525, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725145525, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725145525, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725145524, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.230", StartTime:(*v1.Time)(0xc00183f280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a70770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a707e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://155eb924f5fd1863ccffccb1258ee26a46f6667f72072bc90f85321c7728d59e"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00183f300), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00183f2e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:19:44.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8703" for this suite. May 15 13:20:06.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:20:06.464: INFO: namespace init-container-8703 deletion completed in 22.159494423s • [SLOW TEST:81.659 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:20:06.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-902f81c9-b5ce-4820-9d40-938689f119da STEP: Creating a pod to test consume configMaps May 15 13:20:06.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87" in namespace "configmap-9364" to be "success or failure" May 15 13:20:06.546: INFO: Pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558874ms May 15 13:20:08.550: INFO: Pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006028791s May 15 13:20:10.553: INFO: Pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009832553s May 15 13:20:12.556: INFO: Pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01251404s STEP: Saw pod success May 15 13:20:12.556: INFO: Pod "pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87" satisfied condition "success or failure" May 15 13:20:12.558: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87 container configmap-volume-test: STEP: delete the pod May 15 13:20:12.578: INFO: Waiting for pod pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87 to disappear May 15 13:20:12.582: INFO: Pod pod-configmaps-bf090800-e6cf-456c-9179-0da922499c87 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:20:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9364" for this suite. May 15 13:20:18.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:20:18.716: INFO: namespace configmap-9364 deletion completed in 6.13124865s • [SLOW TEST:12.251 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:20:18.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-675e66cb-f242-4502-85ae-d241648460d1 STEP: Creating a pod to test consume secrets May 15 13:20:18.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162" in namespace "projected-4571" to be "success or failure" May 15 13:20:18.799: INFO: Pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162": Phase="Pending", Reason="", readiness=false. Elapsed: 16.91316ms May 15 13:20:20.804: INFO: Pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021670377s May 15 13:20:22.808: INFO: Pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162": Phase="Running", Reason="", readiness=true. Elapsed: 4.025827576s May 15 13:20:24.812: INFO: Pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029525708s STEP: Saw pod success May 15 13:20:24.812: INFO: Pod "pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162" satisfied condition "success or failure" May 15 13:20:24.815: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162 container projected-secret-volume-test: STEP: delete the pod May 15 13:20:24.850: INFO: Waiting for pod pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162 to disappear May 15 13:20:24.876: INFO: Pod pod-projected-secrets-f9df7eb9-cf49-4616-b67f-df630198b162 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:20:24.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4571" for this suite. May 15 13:20:30.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:20:30.967: INFO: namespace projected-4571 deletion completed in 6.088811887s • [SLOW TEST:12.251 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:20:30.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6890.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6890.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 13:20:37.058: INFO: DNS probes using dns-6890/dns-test-f91a1f18-15a9-411e-8f3f-2bb28d34551c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:20:37.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6890" for this suite. May 15 13:20:43.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:20:43.251: INFO: namespace dns-6890 deletion completed in 6.151682343s • [SLOW TEST:12.283 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:20:43.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:20:43.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6" in namespace "downward-api-8802" to be "success or failure" May 15 13:20:43.325: INFO: Pod "downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.373576ms May 15 13:20:45.330: INFO: Pod "downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020078531s May 15 13:20:47.335: INFO: Pod "downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024433152s STEP: Saw pod success May 15 13:20:47.335: INFO: Pod "downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6" satisfied condition "success or failure" May 15 13:20:47.338: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6 container client-container: STEP: delete the pod May 15 13:20:47.376: INFO: Waiting for pod downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6 to disappear May 15 13:20:47.408: INFO: Pod downwardapi-volume-bbe11f5f-8f5d-4202-8aee-e899aad2b9e6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:20:47.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8802" for this suite. May 15 13:20:53.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:20:53.504: INFO: namespace downward-api-8802 deletion completed in 6.092630585s • [SLOW TEST:10.253 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:20:53.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 15 13:20:53.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5258' May 15 13:20:53.854: INFO: stderr: "" May 15 13:20:53.854: INFO: stdout: "pod/pause created\n" May 15 13:20:53.854: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 15 13:20:53.854: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5258" to be "running and ready" May 15 13:20:53.874: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.836763ms May 15 13:20:55.878: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023382443s May 15 13:20:57.882: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027774453s May 15 13:20:57.882: INFO: Pod "pause" satisfied condition "running and ready" May 15 13:20:57.882: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 15 13:20:57.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5258' May 15 13:20:57.994: INFO: stderr: "" May 15 13:20:57.994: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 15 13:20:57.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5258' May 15 13:20:58.094: INFO: stderr: "" May 15 13:20:58.094: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 15 13:20:58.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5258' May 15 13:20:58.193: INFO: stderr: "" May 15 13:20:58.193: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 15 13:20:58.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5258' May 15 13:20:58.289: INFO: stderr: "" May 15 13:20:58.289: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 15 13:20:58.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5258' May 15 13:20:58.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:20:58.400: INFO: stdout: "pod \"pause\" force deleted\n" May 15 13:20:58.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5258' May 15 13:20:58.516: INFO: stderr: "No resources found.\n" May 15 13:20:58.516: INFO: stdout: "" May 15 13:20:58.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5258 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 13:20:58.610: INFO: stderr: "" May 15 13:20:58.610: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:20:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5258" for this suite. 
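
Annotation: the trailing-dash form above (testing-label-) is kubectl's syntax for removing a label. Against the API, adding and removing a label are both metadata patches, and a null value deletes the key in a JSON merge patch. A sketch with pre-context client-go signatures, using the pod and namespace names from this run:

    package main

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("kubectl-5258")
        // Add the label, then remove it: in a JSON merge patch a null value
        // deletes the key, which is what "kubectl label ... testing-label-" does.
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        if _, err := pods.Patch("pause", types.MergePatchType, add); err != nil {
            panic(err)
        }
        if _, err := pods.Patch("pause", types.MergePatchType, del); err != nil {
            panic(err)
        }
    }
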
May 15 13:21:04.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:21:05.057: INFO: namespace kubectl-5258 deletion completed in 6.444006288s • [SLOW TEST:11.553 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:21:05.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 15 13:21:05.201: INFO: Waiting up to 5m0s for pod "downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe" in namespace "downward-api-4605" to be "success or failure" May 15 13:21:05.204: INFO: Pod "downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.609538ms May 15 13:21:07.208: INFO: Pod "downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007496861s May 15 13:21:09.213: INFO: Pod "downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011665154s STEP: Saw pod success May 15 13:21:09.213: INFO: Pod "downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe" satisfied condition "success or failure" May 15 13:21:09.215: INFO: Trying to get logs from node iruya-worker2 pod downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe container dapi-container: STEP: delete the pod May 15 13:21:09.244: INFO: Waiting for pod downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe to disappear May 15 13:21:09.252: INFO: Pod downward-api-8eb58b15-40f4-4d10-96ec-43fe67016abe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:21:09.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4605" for this suite. 
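
Annotation: the pod name, namespace and IP checked above reach the container as env vars whose values the kubelet resolves from fieldRef selectors on the pod itself. A hedged sketch of that env block; the variable names are illustrative, while the fieldPath strings are the standard downward-API selectors:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    // downwardEnv builds one env var whose value the kubelet fills in
    // from the given fieldPath on the pod at container start.
    func downwardEnv(name, fieldPath string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
            },
        }
    }

    func main() {
        env := []corev1.EnvVar{
            downwardEnv("POD_NAME", "metadata.name"),
            downwardEnv("POD_NAMESPACE", "metadata.namespace"),
            downwardEnv("POD_IP", "status.podIP"),
        }
        out, _ := yaml.Marshal(env)
        fmt.Print(string(out))
    }
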
May 15 13:21:15.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:21:15.323: INFO: namespace downward-api-4605 deletion completed in 6.068046929s • [SLOW TEST:10.266 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:21:15.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-b64861dd-347b-49b8-8036-c9ba8c855ae6 STEP: Creating a pod to test consume secrets May 15 13:21:15.435: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c" in namespace "projected-3334" to be "success or failure" May 15 13:21:15.471: INFO: Pod "pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.344633ms May 15 13:21:17.512: INFO: Pod "pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076928812s May 15 13:21:19.608: INFO: Pod "pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173310931s STEP: Saw pod success May 15 13:21:19.608: INFO: Pod "pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c" satisfied condition "success or failure" May 15 13:21:19.611: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c container projected-secret-volume-test: STEP: delete the pod May 15 13:21:19.634: INFO: Waiting for pod pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c to disappear May 15 13:21:19.644: INFO: Pod pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:21:19.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3334" for this suite. 
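
Annotation: each "Waiting up to 5m0s for pod ... to be 'success or failure'" sequence above, with its Phase="Pending"/"Succeeded" lines, is a poll of the pod's status.phase until it reaches a terminal value. A minimal sketch of that loop with pre-context client-go; the namespace and pod name are copied from this run, so purely illustrative by now:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s, give up after 5m, and stop as soon as the pod
        // reaches a terminal phase: the "success or failure" condition.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("projected-3334").
                Get("pod-projected-secrets-19000e66-6b37-4ae5-b8d1-62a2791c695c", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("Phase=%q\n", pod.Status.Phase)
            return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
        })
        if err != nil {
            panic(err)
        }
    }
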
May 15 13:21:25.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:21:25.748: INFO: namespace projected-3334 deletion completed in 6.100782366s • [SLOW TEST:10.425 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:21:25.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-25xdm in namespace proxy-5508 I0515 13:21:25.910182 6 runners.go:180] Created replication controller with name: proxy-service-25xdm, namespace: proxy-5508, replica count: 1 I0515 13:21:26.960620 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 13:21:27.960850 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 13:21:28.961091 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 13:21:29.961448 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:30.961631 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:31.961835 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:32.962039 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:33.962279 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:34.962512 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:35.962726 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 13:21:36.962893 6 runners.go:180] proxy-service-25xdm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 
13:21:36.967: INFO: setup took 11.128741779s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 15 13:21:36.974: INFO: (0) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 6.865025ms) May 15 13:21:36.975: INFO: (0) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 7.540506ms) May 15 13:21:36.975: INFO: (0) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 7.646237ms) May 15 13:21:36.976: INFO: (0) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 8.175715ms) May 15 13:21:36.976: INFO: (0) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 8.529813ms) May 15 13:21:36.976: INFO: (0) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 8.647411ms) May 15 13:21:36.977: INFO: (0) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 9.245079ms) May 15 13:21:36.978: INFO: (0) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 10.209746ms) May 15 13:21:36.978: INFO: (0) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 10.767376ms) May 15 13:21:36.978: INFO: (0) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 10.521804ms) May 15 13:21:36.981: INFO: (0) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 13.821277ms) May 15 13:21:36.981: INFO: (0) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 13.851029ms) May 15 13:21:36.982: INFO: (0) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 5.847046ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.78961ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 5.782959ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 5.78951ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 5.867366ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 6.010077ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 6.12498ms) May 15 13:21:36.990: INFO: (1) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 4.898509ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 5.193277ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 5.559717ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.66091ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 5.683835ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 5.660218ms) May 15 13:21:36.996: INFO: (2) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... 
(200; 7.571116ms) May 15 13:21:37.001: INFO: (3) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 4.211878ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.529284ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.703461ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.774138ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.877981ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.972824ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.941517ms) May 15 13:21:37.003: INFO: (3) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.996663ms) May 15 13:21:37.004: INFO: (3) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.397613ms) May 15 13:21:37.004: INFO: (3) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 5.553658ms) May 15 13:21:37.004: INFO: (3) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 5.594601ms) May 15 13:21:37.004: INFO: (3) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 5.640995ms) May 15 13:21:37.006: INFO: (4) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 2.361949ms) May 15 13:21:37.008: INFO: (4) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 3.864671ms) May 15 13:21:37.008: INFO: (4) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 4.325745ms) May 15 13:21:37.008: INFO: (4) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.378043ms) May 15 13:21:37.008: INFO: (4) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... 
(200; 4.331638ms) May 15 13:21:37.008: INFO: (4) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.442255ms) May 15 13:21:37.009: INFO: (4) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.611525ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 6.330588ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 6.336096ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 6.267319ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 6.272435ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 6.367849ms) May 15 13:21:37.010: INFO: (4) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 6.30663ms) May 15 13:21:37.014: INFO: (5) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 3.519167ms) May 15 13:21:37.014: INFO: (5) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 3.911313ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.144986ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.280442ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 4.298629ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.385852ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 4.501091ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.520134ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.536139ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.669657ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.883572ms) May 15 13:21:37.015: INFO: (5) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... 
(200; 3.476442ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.141406ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.214664ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.408754ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.433385ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.502343ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 4.550519ms) May 15 13:21:37.021: INFO: (6) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.463923ms) May 15 13:21:37.022: INFO: (6) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.643041ms) May 15 13:21:37.022: INFO: (6) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 4.715735ms) May 15 13:21:37.022: INFO: (6) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.76475ms) May 15 13:21:37.022: INFO: (6) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.881849ms) May 15 13:21:37.022: INFO: (6) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.886811ms) May 15 13:21:37.025: INFO: (7) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 2.792066ms) May 15 13:21:37.025: INFO: (7) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 3.026326ms) May 15 13:21:37.025: INFO: (7) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 3.149722ms) May 15 13:21:37.025: INFO: (7) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 3.244986ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.087579ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... 
(200; 4.602755ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.453131ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.484487ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.004176ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 5.170789ms) May 15 13:21:37.027: INFO: (7) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 5.123854ms) May 15 13:21:37.028: INFO: (7) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 5.483832ms) May 15 13:21:37.028: INFO: (7) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.841936ms) May 15 13:21:37.028: INFO: (7) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.607867ms) May 15 13:21:37.028: INFO: (7) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 2.816212ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 3.368572ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 3.845467ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 3.901242ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 3.961813ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 3.934704ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.100186ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.064082ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.04083ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.106016ms) May 15 13:21:37.032: INFO: (8) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 4.181485ms) May 15 13:21:37.037: INFO: (9) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... 
(200; 4.400088ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.41742ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 4.445056ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 4.429646ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.561421ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.637405ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.653974ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.633582ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.641039ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.840097ms) May 15 13:21:37.038: INFO: (9) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.916534ms) May 15 13:21:37.040: INFO: (10) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 2.10033ms) May 15 13:21:37.042: INFO: (10) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.224501ms) May 15 13:21:37.042: INFO: (10) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.141031ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.437019ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 4.522632ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 4.385136ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.470138ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.752031ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.611841ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.746313ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.662245ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.874043ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 5.026249ms) May 15 13:21:37.043: INFO: (10) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... 
(200; 4.146502ms) May 15 13:21:37.048: INFO: (11) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.36019ms) May 15 13:21:37.049: INFO: (11) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.564181ms) May 15 13:21:37.049: INFO: (11) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 5.487622ms) May 15 13:21:37.049: INFO: (11) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 5.496527ms) May 15 13:21:37.049: INFO: (11) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 3.71859ms) May 15 13:21:37.054: INFO: (12) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.240525ms) May 15 13:21:37.054: INFO: (12) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 4.726441ms) May 15 13:21:37.055: INFO: (12) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.896936ms) May 15 13:21:37.055: INFO: (12) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.842996ms) May 15 13:21:37.055: INFO: (12) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 5.140301ms) May 15 13:21:37.055: INFO: (12) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.187193ms) May 15 13:21:37.055: INFO: (12) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 5.236858ms) May 15 13:21:37.056: INFO: (12) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 5.494467ms) May 15 13:21:37.058: INFO: (13) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 1.991829ms) May 15 13:21:37.058: INFO: (13) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 2.785602ms) May 15 13:21:37.060: INFO: (13) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.425027ms) May 15 13:21:37.060: INFO: (13) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.461011ms) May 15 13:21:37.060: INFO: (13) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.544916ms) May 15 13:21:37.060: INFO: (13) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.553224ms) May 15 13:21:37.060: INFO: (13) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 4.797547ms) May 15 13:21:37.061: INFO: (13) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 5.108287ms) May 15 13:21:37.061: INFO: (13) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 4.471016ms) May 15 13:21:37.066: INFO: (14) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... 
(200; 4.62893ms) May 15 13:21:37.066: INFO: (14) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.486297ms) May 15 13:21:37.066: INFO: (14) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.662621ms) May 15 13:21:37.066: INFO: (14) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test (200; 5.052862ms) May 15 13:21:37.066: INFO: (14) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 5.101885ms) May 15 13:21:37.067: INFO: (14) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.673114ms) May 15 13:21:37.067: INFO: (14) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 5.8015ms) May 15 13:21:37.071: INFO: (15) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 2.351132ms) May 15 13:21:37.071: INFO: (15) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test<... (200; 4.721258ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.662519ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.450432ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.708275ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 5.978605ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.925265ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 5.611496ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 5.290796ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 5.91082ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 5.086865ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 5.437178ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.265711ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.092994ms) May 15 13:21:37.073: INFO: (15) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.31598ms) May 15 13:21:37.075: INFO: (16) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 2.029945ms) May 15 13:21:37.076: INFO: (16) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 2.144046ms) May 15 13:21:37.076: INFO: (16) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 2.199017ms) May 15 13:21:37.076: INFO: (16) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 2.394366ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... 
(200; 3.064483ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 3.886325ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.391521ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 3.647002ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 3.531682ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 3.719244ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.177984ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.149405ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 3.963511ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.451857ms) May 15 13:21:37.078: INFO: (16) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: ... (200; 4.556915ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.698189ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 4.777365ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.825593ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.811546ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 4.953322ms) May 15 13:21:37.083: INFO: (17) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.882569ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... 
(200; 4.508054ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.659957ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.895117ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.970583ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.986882ms) May 15 13:21:37.088: INFO: (18) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 5.030357ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 5.452427ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 5.511809ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 5.490626ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.577125ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 5.64213ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 5.629114ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... (200; 5.59139ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d/proxy/: test (200; 5.673021ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 5.634371ms) May 15 13:21:37.089: INFO: (18) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:443/proxy/: test (200; 4.072229ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname2/proxy/: bar (200; 4.210467ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname1/proxy/: tls baz (200; 4.187168ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:460/proxy/: tls baz (200; 4.311104ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:162/proxy/: bar (200; 4.26857ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/pods/https:proxy-service-25xdm-hs27d:462/proxy/: tls qux (200; 4.292148ms) May 15 13:21:37.093: INFO: (19) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.243808ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:1080/proxy/: ... (200; 4.325089ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/pods/proxy-service-25xdm-hs27d:1080/proxy/: test<... 
(200; 4.4692ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/pods/http:proxy-service-25xdm-hs27d:160/proxy/: foo (200; 4.472788ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/services/proxy-service-25xdm:portname1/proxy/: foo (200; 4.588528ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname1/proxy/: foo (200; 4.65447ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/services/http:proxy-service-25xdm:portname2/proxy/: bar (200; 4.726581ms) May 15 13:21:37.094: INFO: (19) /api/v1/namespaces/proxy-5508/services/https:proxy-service-25xdm:tlsportname2/proxy/: tls qux (200; 4.862917ms) STEP: deleting ReplicationController proxy-service-25xdm in namespace proxy-5508, will wait for the garbage collector to delete the pods May 15 13:21:37.151: INFO: Deleting ReplicationController proxy-service-25xdm took: 5.854747ms May 15 13:21:37.452: INFO: Terminating ReplicationController proxy-service-25xdm pods took: 300.260252ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:21:41.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5508" for this suite. May 15 13:21:48.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:21:48.084: INFO: namespace proxy-5508 deletion completed in 6.115517264s • [SLOW TEST:22.336 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:21:48.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3777 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3777 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3777 May 15 13:21:48.200: INFO: Found 0 stateful pods, waiting for 1 May 15 13:21:58.206: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 15 13:21:58.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 13:21:58.443: INFO: stderr: "I0515 13:21:58.335113 557 log.go:172] (0xc00013adc0) (0xc00042e6e0) Create stream\nI0515 13:21:58.335178 557 log.go:172] (0xc00013adc0) (0xc00042e6e0) Stream added, broadcasting: 1\nI0515 13:21:58.338819 557 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0515 13:21:58.338875 557 log.go:172] (0xc00013adc0) (0xc000ad4000) Create stream\nI0515 13:21:58.338890 557 log.go:172] (0xc00013adc0) (0xc000ad4000) Stream added, broadcasting: 3\nI0515 13:21:58.339866 557 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0515 13:21:58.339899 557 log.go:172] (0xc00013adc0) (0xc000ad40a0) Create stream\nI0515 13:21:58.339917 557 log.go:172] (0xc00013adc0) (0xc000ad40a0) Stream added, broadcasting: 5\nI0515 13:21:58.340922 557 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0515 13:21:58.410303 557 log.go:172] (0xc00013adc0) Data frame received for 5\nI0515 13:21:58.410331 557 log.go:172] (0xc000ad40a0) (5) Data frame handling\nI0515 13:21:58.410347 557 log.go:172] (0xc000ad40a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 13:21:58.436883 557 log.go:172] (0xc00013adc0) Data frame received for 5\nI0515 13:21:58.436909 557 log.go:172] (0xc000ad40a0) (5) Data frame handling\nI0515 13:21:58.436937 557 log.go:172] (0xc00013adc0) Data frame received for 3\nI0515 13:21:58.436955 557 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0515 13:21:58.436972 557 log.go:172] (0xc000ad4000) (3) Data frame sent\nI0515 13:21:58.436981 557 log.go:172] (0xc00013adc0) Data frame received for 3\nI0515 13:21:58.436987 557 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0515 13:21:58.438749 557 log.go:172] (0xc00013adc0) Data frame received for 1\nI0515 13:21:58.438769 557 log.go:172] (0xc00042e6e0) (1) Data frame handling\nI0515 13:21:58.438779 557 log.go:172] (0xc00042e6e0) (1) Data frame sent\nI0515 13:21:58.438791 557 log.go:172] (0xc00013adc0) (0xc00042e6e0) Stream removed, broadcasting: 1\nI0515 13:21:58.438804 557 log.go:172] (0xc00013adc0) Go away received\nI0515 13:21:58.439246 557 log.go:172] (0xc00013adc0) (0xc00042e6e0) Stream removed, broadcasting: 1\nI0515 13:21:58.439276 557 log.go:172] (0xc00013adc0) (0xc000ad4000) Stream removed, broadcasting: 3\nI0515 13:21:58.439289 557 log.go:172] (0xc00013adc0) (0xc000ad40a0) Stream removed, broadcasting: 5\n" May 15 13:21:58.443: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 13:21:58.443: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 13:21:58.447: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 13:22:08.682: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 13:22:08.682: INFO: Waiting for statefulset status.replicas updated to 0 May 15 13:22:08.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999557s May 15 13:22:09.747: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.949778019s May 15 13:22:10.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.945615276s May 15 13:22:11.756: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 6.940578524s May 15 13:22:12.760: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.936077123s May 15 13:22:13.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.931657135s May 15 13:22:14.770: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.926422896s May 15 13:22:15.776: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.922079588s May 15 13:22:16.781: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.916446689s May 15 13:22:17.786: INFO: Verifying statefulset ss doesn't scale past 1 for another 911.077893ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3777 May 15 13:22:18.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 13:22:19.002: INFO: stderr: "I0515 13:22:18.910588 578 log.go:172] (0xc00092a420) (0xc0003e46e0) Create stream\nI0515 13:22:18.910635 578 log.go:172] (0xc00092a420) (0xc0003e46e0) Stream added, broadcasting: 1\nI0515 13:22:18.913076 578 log.go:172] (0xc00092a420) Reply frame received for 1\nI0515 13:22:18.913341 578 log.go:172] (0xc00092a420) (0xc00092e000) Create stream\nI0515 13:22:18.913388 578 log.go:172] (0xc00092a420) (0xc00092e000) Stream added, broadcasting: 3\nI0515 13:22:18.914403 578 log.go:172] (0xc00092a420) Reply frame received for 3\nI0515 13:22:18.914427 578 log.go:172] (0xc00092a420) (0xc00092e0a0) Create stream\nI0515 13:22:18.914434 578 log.go:172] (0xc00092a420) (0xc00092e0a0) Stream added, broadcasting: 5\nI0515 13:22:18.915205 578 log.go:172] (0xc00092a420) Reply frame received for 5\nI0515 13:22:18.996597 578 log.go:172] (0xc00092a420) Data frame received for 3\nI0515 13:22:18.996648 578 log.go:172] (0xc00092e000) (3) Data frame handling\nI0515 13:22:18.996664 578 log.go:172] (0xc00092e000) (3) Data frame sent\nI0515 13:22:18.996675 578 log.go:172] (0xc00092a420) Data frame received for 3\nI0515 13:22:18.996691 578 log.go:172] (0xc00092e000) (3) Data frame handling\nI0515 13:22:18.996711 578 log.go:172] (0xc00092a420) Data frame received for 5\nI0515 13:22:18.996724 578 log.go:172] (0xc00092e0a0) (5) Data frame handling\nI0515 13:22:18.996748 578 log.go:172] (0xc00092e0a0) (5) Data frame sent\nI0515 13:22:18.996762 578 log.go:172] (0xc00092a420) Data frame received for 5\nI0515 13:22:18.996771 578 log.go:172] (0xc00092e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 13:22:18.998418 578 log.go:172] (0xc00092a420) Data frame received for 1\nI0515 13:22:18.998431 578 log.go:172] (0xc0003e46e0) (1) Data frame handling\nI0515 13:22:18.998443 578 log.go:172] (0xc0003e46e0) (1) Data frame sent\nI0515 13:22:18.998455 578 log.go:172] (0xc00092a420) (0xc0003e46e0) Stream removed, broadcasting: 1\nI0515 13:22:18.998507 578 log.go:172] (0xc00092a420) Go away received\nI0515 13:22:18.998658 578 log.go:172] (0xc00092a420) (0xc0003e46e0) Stream removed, broadcasting: 1\nI0515 13:22:18.998668 578 log.go:172] (0xc00092a420) (0xc00092e000) Stream removed, broadcasting: 3\nI0515 13:22:18.998673 578 log.go:172] (0xc00092a420) (0xc00092e0a0) Stream removed, broadcasting: 5\n" May 15 13:22:19.002: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 13:22:19.002: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' May 15 13:22:19.005: INFO: Found 1 stateful pods, waiting for 3 May 15 13:22:29.010: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 13:22:29.010: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 13:22:29.010: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 15 13:22:29.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 13:22:29.249: INFO: stderr: "I0515 13:22:29.142577 595 log.go:172] (0xc00083c370) (0xc000724780) Create stream\nI0515 13:22:29.142644 595 log.go:172] (0xc00083c370) (0xc000724780) Stream added, broadcasting: 1\nI0515 13:22:29.145090 595 log.go:172] (0xc00083c370) Reply frame received for 1\nI0515 13:22:29.145352 595 log.go:172] (0xc00083c370) (0xc000954000) Create stream\nI0515 13:22:29.145403 595 log.go:172] (0xc00083c370) (0xc000954000) Stream added, broadcasting: 3\nI0515 13:22:29.146225 595 log.go:172] (0xc00083c370) Reply frame received for 3\nI0515 13:22:29.146259 595 log.go:172] (0xc00083c370) (0xc0009540a0) Create stream\nI0515 13:22:29.146275 595 log.go:172] (0xc00083c370) (0xc0009540a0) Stream added, broadcasting: 5\nI0515 13:22:29.147154 595 log.go:172] (0xc00083c370) Reply frame received for 5\nI0515 13:22:29.242967 595 log.go:172] (0xc00083c370) Data frame received for 3\nI0515 13:22:29.243002 595 log.go:172] (0xc000954000) (3) Data frame handling\nI0515 13:22:29.243015 595 log.go:172] (0xc000954000) (3) Data frame sent\nI0515 13:22:29.243028 595 log.go:172] (0xc00083c370) Data frame received for 3\nI0515 13:22:29.243035 595 log.go:172] (0xc000954000) (3) Data frame handling\nI0515 13:22:29.243044 595 log.go:172] (0xc00083c370) Data frame received for 5\nI0515 13:22:29.243051 595 log.go:172] (0xc0009540a0) (5) Data frame handling\nI0515 13:22:29.243067 595 log.go:172] (0xc0009540a0) (5) Data frame sent\nI0515 13:22:29.243075 595 log.go:172] (0xc00083c370) Data frame received for 5\nI0515 13:22:29.243081 595 log.go:172] (0xc0009540a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 13:22:29.244930 595 log.go:172] (0xc00083c370) Data frame received for 1\nI0515 13:22:29.244951 595 log.go:172] (0xc000724780) (1) Data frame handling\nI0515 13:22:29.244964 595 log.go:172] (0xc000724780) (1) Data frame sent\nI0515 13:22:29.244979 595 log.go:172] (0xc00083c370) (0xc000724780) Stream removed, broadcasting: 1\nI0515 13:22:29.244995 595 log.go:172] (0xc00083c370) Go away received\nI0515 13:22:29.245615 595 log.go:172] (0xc00083c370) (0xc000724780) Stream removed, broadcasting: 1\nI0515 13:22:29.245636 595 log.go:172] (0xc00083c370) (0xc000954000) Stream removed, broadcasting: 3\nI0515 13:22:29.245646 595 log.go:172] (0xc00083c370) (0xc0009540a0) Stream removed, broadcasting: 5\n" May 15 13:22:29.249: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 13:22:29.249: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 13:22:29.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || 
true' May 15 13:22:29.517: INFO: stderr: "I0515 13:22:29.385533 615 log.go:172] (0xc0009c02c0) (0xc0009506e0) Create stream\nI0515 13:22:29.385616 615 log.go:172] (0xc0009c02c0) (0xc0009506e0) Stream added, broadcasting: 1\nI0515 13:22:29.387826 615 log.go:172] (0xc0009c02c0) Reply frame received for 1\nI0515 13:22:29.387865 615 log.go:172] (0xc0009c02c0) (0xc000678280) Create stream\nI0515 13:22:29.387877 615 log.go:172] (0xc0009c02c0) (0xc000678280) Stream added, broadcasting: 3\nI0515 13:22:29.389005 615 log.go:172] (0xc0009c02c0) Reply frame received for 3\nI0515 13:22:29.389051 615 log.go:172] (0xc0009c02c0) (0xc000950780) Create stream\nI0515 13:22:29.389064 615 log.go:172] (0xc0009c02c0) (0xc000950780) Stream added, broadcasting: 5\nI0515 13:22:29.390612 615 log.go:172] (0xc0009c02c0) Reply frame received for 5\nI0515 13:22:29.463214 615 log.go:172] (0xc0009c02c0) Data frame received for 5\nI0515 13:22:29.463242 615 log.go:172] (0xc000950780) (5) Data frame handling\nI0515 13:22:29.463263 615 log.go:172] (0xc000950780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 13:22:29.505762 615 log.go:172] (0xc0009c02c0) Data frame received for 3\nI0515 13:22:29.505794 615 log.go:172] (0xc000678280) (3) Data frame handling\nI0515 13:22:29.505816 615 log.go:172] (0xc000678280) (3) Data frame sent\nI0515 13:22:29.506045 615 log.go:172] (0xc0009c02c0) Data frame received for 3\nI0515 13:22:29.506080 615 log.go:172] (0xc000678280) (3) Data frame handling\nI0515 13:22:29.506125 615 log.go:172] (0xc0009c02c0) Data frame received for 5\nI0515 13:22:29.506151 615 log.go:172] (0xc000950780) (5) Data frame handling\nI0515 13:22:29.507921 615 log.go:172] (0xc0009c02c0) Data frame received for 1\nI0515 13:22:29.507947 615 log.go:172] (0xc0009506e0) (1) Data frame handling\nI0515 13:22:29.507959 615 log.go:172] (0xc0009506e0) (1) Data frame sent\nI0515 13:22:29.507976 615 log.go:172] (0xc0009c02c0) (0xc0009506e0) Stream removed, broadcasting: 1\nI0515 13:22:29.508003 615 log.go:172] (0xc0009c02c0) Go away received\nI0515 13:22:29.508429 615 log.go:172] (0xc0009c02c0) (0xc0009506e0) Stream removed, broadcasting: 1\nI0515 13:22:29.508456 615 log.go:172] (0xc0009c02c0) (0xc000678280) Stream removed, broadcasting: 3\nI0515 13:22:29.508471 615 log.go:172] (0xc0009c02c0) (0xc000950780) Stream removed, broadcasting: 5\n" May 15 13:22:29.517: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 13:22:29.517: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 13:22:29.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 13:22:29.787: INFO: stderr: "I0515 13:22:29.637057 635 log.go:172] (0xc000a5e420) (0xc0005c2820) Create stream\nI0515 13:22:29.637237 635 log.go:172] (0xc000a5e420) (0xc0005c2820) Stream added, broadcasting: 1\nI0515 13:22:29.639254 635 log.go:172] (0xc000a5e420) Reply frame received for 1\nI0515 13:22:29.639326 635 log.go:172] (0xc000a5e420) (0xc00094e000) Create stream\nI0515 13:22:29.639368 635 log.go:172] (0xc000a5e420) (0xc00094e000) Stream added, broadcasting: 3\nI0515 13:22:29.640406 635 log.go:172] (0xc000a5e420) Reply frame received for 3\nI0515 13:22:29.640438 635 log.go:172] (0xc000a5e420) (0xc000834000) Create stream\nI0515 13:22:29.640451 635 log.go:172] (0xc000a5e420) (0xc000834000) Stream added, 
broadcasting: 5\nI0515 13:22:29.641568 635 log.go:172] (0xc000a5e420) Reply frame received for 5\nI0515 13:22:29.711463 635 log.go:172] (0xc000a5e420) Data frame received for 5\nI0515 13:22:29.711485 635 log.go:172] (0xc000834000) (5) Data frame handling\nI0515 13:22:29.711497 635 log.go:172] (0xc000834000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 13:22:29.778603 635 log.go:172] (0xc000a5e420) Data frame received for 3\nI0515 13:22:29.778642 635 log.go:172] (0xc00094e000) (3) Data frame handling\nI0515 13:22:29.778675 635 log.go:172] (0xc00094e000) (3) Data frame sent\nI0515 13:22:29.778693 635 log.go:172] (0xc000a5e420) Data frame received for 3\nI0515 13:22:29.778708 635 log.go:172] (0xc00094e000) (3) Data frame handling\nI0515 13:22:29.778827 635 log.go:172] (0xc000a5e420) Data frame received for 5\nI0515 13:22:29.778857 635 log.go:172] (0xc000834000) (5) Data frame handling\nI0515 13:22:29.780803 635 log.go:172] (0xc000a5e420) Data frame received for 1\nI0515 13:22:29.780836 635 log.go:172] (0xc0005c2820) (1) Data frame handling\nI0515 13:22:29.780858 635 log.go:172] (0xc0005c2820) (1) Data frame sent\nI0515 13:22:29.780880 635 log.go:172] (0xc000a5e420) (0xc0005c2820) Stream removed, broadcasting: 1\nI0515 13:22:29.780978 635 log.go:172] (0xc000a5e420) Go away received\nI0515 13:22:29.781602 635 log.go:172] (0xc000a5e420) (0xc0005c2820) Stream removed, broadcasting: 1\nI0515 13:22:29.781627 635 log.go:172] (0xc000a5e420) (0xc00094e000) Stream removed, broadcasting: 3\nI0515 13:22:29.781638 635 log.go:172] (0xc000a5e420) (0xc000834000) Stream removed, broadcasting: 5\n" May 15 13:22:29.787: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 13:22:29.787: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 13:22:29.787: INFO: Waiting for statefulset status.replicas updated to 0 May 15 13:22:29.791: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 15 13:22:39.800: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 13:22:39.800: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 13:22:39.800: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 13:22:39.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999267s May 15 13:22:40.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.931833905s May 15 13:22:41.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.926723521s May 15 13:22:42.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.920860841s May 15 13:22:43.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.915388186s May 15 13:22:44.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.911808553s May 15 13:22:45.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.907827166s May 15 13:22:46.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.903752897s May 15 13:22:47.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.898607506s May 15 13:22:48.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 892.844593ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3777 May 15 13:22:49.924: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 13:22:50.160: INFO: stderr: "I0515 13:22:50.059446 655 log.go:172] (0xc0003aa420) (0xc00030c820) Create stream\nI0515 13:22:50.059514 655 log.go:172] (0xc0003aa420) (0xc00030c820) Stream added, broadcasting: 1\nI0515 13:22:50.063046 655 log.go:172] (0xc0003aa420) Reply frame received for 1\nI0515 13:22:50.063129 655 log.go:172] (0xc0003aa420) (0xc0007fc000) Create stream\nI0515 13:22:50.063158 655 log.go:172] (0xc0003aa420) (0xc0007fc000) Stream added, broadcasting: 3\nI0515 13:22:50.064574 655 log.go:172] (0xc0003aa420) Reply frame received for 3\nI0515 13:22:50.064665 655 log.go:172] (0xc0003aa420) (0xc00092a000) Create stream\nI0515 13:22:50.064716 655 log.go:172] (0xc0003aa420) (0xc00092a000) Stream added, broadcasting: 5\nI0515 13:22:50.066370 655 log.go:172] (0xc0003aa420) Reply frame received for 5\nI0515 13:22:50.153314 655 log.go:172] (0xc0003aa420) Data frame received for 5\nI0515 13:22:50.153348 655 log.go:172] (0xc00092a000) (5) Data frame handling\nI0515 13:22:50.153364 655 log.go:172] (0xc00092a000) (5) Data frame sent\nI0515 13:22:50.153372 655 log.go:172] (0xc0003aa420) Data frame received for 5\nI0515 13:22:50.153377 655 log.go:172] (0xc00092a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 13:22:50.153400 655 log.go:172] (0xc0003aa420) Data frame received for 3\nI0515 13:22:50.153418 655 log.go:172] (0xc0007fc000) (3) Data frame handling\nI0515 13:22:50.153431 655 log.go:172] (0xc0007fc000) (3) Data frame sent\nI0515 13:22:50.153440 655 log.go:172] (0xc0003aa420) Data frame received for 3\nI0515 13:22:50.153448 655 log.go:172] (0xc0007fc000) (3) Data frame handling\nI0515 13:22:50.154892 655 log.go:172] (0xc0003aa420) Data frame received for 1\nI0515 13:22:50.154916 655 log.go:172] (0xc00030c820) (1) Data frame handling\nI0515 13:22:50.154925 655 log.go:172] (0xc00030c820) (1) Data frame sent\nI0515 13:22:50.154936 655 log.go:172] (0xc0003aa420) (0xc00030c820) Stream removed, broadcasting: 1\nI0515 13:22:50.154961 655 log.go:172] (0xc0003aa420) Go away received\nI0515 13:22:50.155174 655 log.go:172] (0xc0003aa420) (0xc00030c820) Stream removed, broadcasting: 1\nI0515 13:22:50.155186 655 log.go:172] (0xc0003aa420) (0xc0007fc000) Stream removed, broadcasting: 3\nI0515 13:22:50.155193 655 log.go:172] (0xc0003aa420) (0xc00092a000) Stream removed, broadcasting: 5\n" May 15 13:22:50.160: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 13:22:50.160: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 13:22:50.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 13:22:50.344: INFO: stderr: "I0515 13:22:50.276779 677 log.go:172] (0xc0003ba420) (0xc0007dedc0) Create stream\nI0515 13:22:50.276847 677 log.go:172] (0xc0003ba420) (0xc0007dedc0) Stream added, broadcasting: 1\nI0515 13:22:50.279674 677 log.go:172] (0xc0003ba420) Reply frame received for 1\nI0515 13:22:50.279723 677 log.go:172] (0xc0003ba420) (0xc0008e8000) Create stream\nI0515 13:22:50.279738 677 log.go:172] (0xc0003ba420) (0xc0008e8000) Stream added, broadcasting: 3\nI0515 13:22:50.280713 677 log.go:172] (0xc0003ba420) Reply frame received for 3\nI0515 
13:22:50.280757 677 log.go:172] (0xc0003ba420) (0xc0006c0000) Create stream\nI0515 13:22:50.280774 677 log.go:172] (0xc0003ba420) (0xc0006c0000) Stream added, broadcasting: 5\nI0515 13:22:50.281930 677 log.go:172] (0xc0003ba420) Reply frame received for 5\nI0515 13:22:50.335647 677 log.go:172] (0xc0003ba420) Data frame received for 3\nI0515 13:22:50.335681 677 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0515 13:22:50.335707 677 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0515 13:22:50.335718 677 log.go:172] (0xc0003ba420) Data frame received for 3\nI0515 13:22:50.335728 677 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0515 13:22:50.335878 677 log.go:172] (0xc0003ba420) Data frame received for 5\nI0515 13:22:50.335906 677 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0515 13:22:50.335927 677 log.go:172] (0xc0006c0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 13:22:50.335938 677 log.go:172] (0xc0003ba420) Data frame received for 5\nI0515 13:22:50.335981 677 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0515 13:22:50.337737 677 log.go:172] (0xc0003ba420) Data frame received for 1\nI0515 13:22:50.337752 677 log.go:172] (0xc0007dedc0) (1) Data frame handling\nI0515 13:22:50.337760 677 log.go:172] (0xc0007dedc0) (1) Data frame sent\nI0515 13:22:50.338004 677 log.go:172] (0xc0003ba420) (0xc0007dedc0) Stream removed, broadcasting: 1\nI0515 13:22:50.339681 677 log.go:172] (0xc0003ba420) Go away received\nI0515 13:22:50.339974 677 log.go:172] (0xc0003ba420) (0xc0007dedc0) Stream removed, broadcasting: 1\nI0515 13:22:50.339995 677 log.go:172] (0xc0003ba420) (0xc0008e8000) Stream removed, broadcasting: 3\nI0515 13:22:50.340003 677 log.go:172] (0xc0003ba420) (0xc0006c0000) Stream removed, broadcasting: 5\n" May 15 13:22:50.344: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 13:22:50.344: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 13:22:50.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3777 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 13:22:50.537: INFO: stderr: "I0515 13:22:50.468633 694 log.go:172] (0xc000a749a0) (0xc000a58b40) Create stream\nI0515 13:22:50.468703 694 log.go:172] (0xc000a749a0) (0xc000a58b40) Stream added, broadcasting: 1\nI0515 13:22:50.472556 694 log.go:172] (0xc000a749a0) Reply frame received for 1\nI0515 13:22:50.472618 694 log.go:172] (0xc000a749a0) (0xc000ac4140) Create stream\nI0515 13:22:50.472652 694 log.go:172] (0xc000a749a0) (0xc000ac4140) Stream added, broadcasting: 3\nI0515 13:22:50.473886 694 log.go:172] (0xc000a749a0) Reply frame received for 3\nI0515 13:22:50.473937 694 log.go:172] (0xc000a749a0) (0xc000ac4000) Create stream\nI0515 13:22:50.473976 694 log.go:172] (0xc000a749a0) (0xc000ac4000) Stream added, broadcasting: 5\nI0515 13:22:50.475825 694 log.go:172] (0xc000a749a0) Reply frame received for 5\nI0515 13:22:50.530358 694 log.go:172] (0xc000a749a0) Data frame received for 5\nI0515 13:22:50.530389 694 log.go:172] (0xc000ac4000) (5) Data frame handling\nI0515 13:22:50.530399 694 log.go:172] (0xc000ac4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 13:22:50.530623 694 log.go:172] (0xc000a749a0) Data frame received for 3\nI0515 13:22:50.530639 694 log.go:172] (0xc000ac4140) (3) Data frame handling\nI0515 13:22:50.530647 694 log.go:172] 
(0xc000ac4140) (3) Data frame sent\nI0515 13:22:50.530652 694 log.go:172] (0xc000a749a0) Data frame received for 3\nI0515 13:22:50.530656 694 log.go:172] (0xc000ac4140) (3) Data frame handling\nI0515 13:22:50.530893 694 log.go:172] (0xc000a749a0) Data frame received for 5\nI0515 13:22:50.530926 694 log.go:172] (0xc000ac4000) (5) Data frame handling\nI0515 13:22:50.532282 694 log.go:172] (0xc000a749a0) Data frame received for 1\nI0515 13:22:50.532300 694 log.go:172] (0xc000a58b40) (1) Data frame handling\nI0515 13:22:50.532307 694 log.go:172] (0xc000a58b40) (1) Data frame sent\nI0515 13:22:50.532317 694 log.go:172] (0xc000a749a0) (0xc000a58b40) Stream removed, broadcasting: 1\nI0515 13:22:50.532328 694 log.go:172] (0xc000a749a0) Go away received\nI0515 13:22:50.532619 694 log.go:172] (0xc000a749a0) (0xc000a58b40) Stream removed, broadcasting: 1\nI0515 13:22:50.532643 694 log.go:172] (0xc000a749a0) (0xc000ac4140) Stream removed, broadcasting: 3\nI0515 13:22:50.532652 694 log.go:172] (0xc000a749a0) (0xc000ac4000) Stream removed, broadcasting: 5\n" May 15 13:22:50.538: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 13:22:50.538: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 13:22:50.538: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 15 13:23:20.600: INFO: Deleting all statefulset in ns statefulset-3777 May 15 13:23:20.602: INFO: Scaling statefulset ss to 0 May 15 13:23:20.610: INFO: Waiting for statefulset status.replicas updated to 0 May 15 13:23:20.612: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:23:20.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3777" for this suite. 
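[Editor's note: the test above exercises ordered, readiness-gated StatefulSet scaling. As context, a minimal hand-run sketch of the same behavior; only the ss/statefulset-3777 names come from the log, the rest is illustrative:]
  # With the default podManagementPolicy (OrderedReady), pods are created in
  # ordinal order and torn down in reverse order (ss-2, ss-1, ss-0), and
  # scaling halts while any stateful pod is unready.
  kubectl scale statefulset ss --replicas=0 --namespace=statefulset-3777
  kubectl get pods --namespace=statefulset-3777 -w   # watch the reverse-order teardown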
May 15 13:23:26.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:23:26.723: INFO: namespace statefulset-3777 deletion completed in 6.097652406s • [SLOW TEST:98.639 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:23:26.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:23:26.912: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d588be64-edb4-47aa-8f4e-16b11cd552f8", Controller:(*bool)(0xc002859dea), BlockOwnerDeletion:(*bool)(0xc002859deb)}} May 15 13:23:26.928: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"834dca10-fb08-4056-911f-39853e5f8253", Controller:(*bool)(0xc00218408a), BlockOwnerDeletion:(*bool)(0xc00218408b)}} May 15 13:23:26.957: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"285aca32-f987-43bd-ba88-ef1048c1a662", Controller:(*bool)(0xc0030a07fa), BlockOwnerDeletion:(*bool)(0xc0030a07fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:23:31.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3176" for this suite. 
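[Editor's note: the garbage-collector test above links pod1 -> pod3 -> pod2 -> pod1 through ownerReferences and checks that deletion still makes progress despite the cycle. A hedged sketch of wiring one such reference by hand; the pod names mirror the log, the patch itself is illustrative:]
  # Make pod2 a dependent of pod1 (as in the log's pod2.ObjectMeta.OwnerReferences).
  UID=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
  kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[
    {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$UID\",
     \"controller\":true,\"blockOwnerDeletion\":true}]}}"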
May 15 13:23:38.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:23:38.085: INFO: namespace gc-3176 deletion completed in 6.08985292s • [SLOW TEST:11.361 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:23:38.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 13:23:38.160: INFO: Waiting up to 5m0s for pod "pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda" in namespace "emptydir-3783" to be "success or failure" May 15 13:23:38.176: INFO: Pod "pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 15.740466ms May 15 13:23:40.179: INFO: Pod "pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01945757s May 15 13:23:42.192: INFO: Pod "pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032023012s STEP: Saw pod success May 15 13:23:42.192: INFO: Pod "pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda" satisfied condition "success or failure" May 15 13:23:42.195: INFO: Trying to get logs from node iruya-worker pod pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda container test-container: STEP: delete the pod May 15 13:23:42.237: INFO: Waiting for pod pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda to disappear May 15 13:23:42.247: INFO: Pod pod-af614ef1-d99e-4568-aee5-40e5c1d6dcda no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:23:42.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3783" for this suite. 
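[Editor's note: this emptyDir case and the later (root,0644,default) and (root,0666,tmpfs) cases differ only in the file mode and medium under test. One illustrative pod covering the pattern; the image, names, and commands are assumptions, not the test's own:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f"]
      volumeMounts:
      - {name: ed, mountPath: /ed}
    volumes:
    - name: ed
      emptyDir: {}        # emptyDir: {medium: Memory} for the tmpfs variants
  EOF
  kubectl logs emptydir-demo    # should print 666 once the pod has Succeeded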
May 15 13:23:48.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:23:48.339: INFO: namespace emptydir-3783 deletion completed in 6.088512576s • [SLOW TEST:10.254 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:23:48.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b May 15 13:23:48.418: INFO: Pod name my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b: Found 0 pods out of 1 May 15 13:23:53.423: INFO: Pod name my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b: Found 1 pods out of 1 May 15 13:23:53.423: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b" are running May 15 13:23:53.426: INFO: Pod "my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b-z8rzg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 13:23:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 13:23:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 13:23:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 13:23:48 +0000 UTC Reason: Message:}]) May 15 13:23:53.426: INFO: Trying to dial the pod May 15 13:23:58.478: INFO: Controller my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b: Got expected result from replica 1 [my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b-z8rzg]: "my-hostname-basic-8cf607c1-7250-4d07-9f04-7493310be35b-z8rzg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:23:58.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2976" for this suite. 
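[Editor's note: a sketch of the shape of ReplicationController this test creates: one replica of a public image that answers HTTP requests with its own pod name. The image and port are assumptions based on the serve-hostname convention, not read from the log:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector: {name: my-hostname-basic}
    template:
      metadata:
        labels: {name: my-hostname-basic}
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed
          ports:
          - containerPort: 9376
  EOF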
May 15 13:24:04.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:24:04.770: INFO: namespace replication-controller-2976 deletion completed in 6.288050864s • [SLOW TEST:16.431 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:24:04.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-78bfd753-93ee-43f5-9f83-24cc6d446250 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-78bfd753-93ee-43f5-9f83-24cc6d446250 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:25:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9761" for this suite. 
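[Editor's note: the long runtime above comes from waiting for the kubelet's sync loop to push a configMap edit into an already-mounted projected volume. An illustrative update step; the configMap name, key, and mount path are assumptions:]
  # Re-render and apply the configMap in place; the kubelet refreshes
  # projected volume contents without restarting the pod.
  kubectl create configmap projected-cm --from-literal=data-1=value-2 \
    --dry-run -o yaml | kubectl apply -f -     # newer kubectl: --dry-run=client
  kubectl exec mounting-pod -- cat /etc/projected/data-1   # eventually shows value-2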
May 15 13:26:00.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:00.767: INFO: namespace projected-9761 deletion completed in 22.140426791s • [SLOW TEST:115.996 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:00.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-74f2322d-78e4-4bc0-8028-b69269b6d9ae STEP: Creating a pod to test consume secrets May 15 13:26:00.839: INFO: Waiting up to 5m0s for pod "pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21" in namespace "secrets-2422" to be "success or failure" May 15 13:26:00.844: INFO: Pod "pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622985ms May 15 13:26:03.106: INFO: Pod "pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266370412s May 15 13:26:05.110: INFO: Pod "pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.270283303s STEP: Saw pod success May 15 13:26:05.110: INFO: Pod "pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21" satisfied condition "success or failure" May 15 13:26:05.112: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21 container secret-volume-test: STEP: delete the pod May 15 13:26:05.145: INFO: Waiting for pod pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21 to disappear May 15 13:26:05.302: INFO: Pod pod-secrets-3914eccd-a55d-47dd-9d57-64adfd423d21 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2422" for this suite. 
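[Editor's note: "mappings and Item Mode" refers to the items stanza of a secret volume: remapping a secret key to a chosen filename and giving that file its own mode. A minimal sketch; the names and mode are illustrative:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/secret-volume"]
      volumeMounts:
      - {name: secret-volume, mountPath: /etc/secret-volume}
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map      # must already exist
        items:
        - key: data-1                    # key inside the secret
          path: new-path-data-1          # filename it is exposed under
          mode: 0400                     # per-item file mode
  EOF
  # configMap volumes accept the same items/key/path/mode mapping.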
May 15 13:26:11.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:11.457: INFO: namespace secrets-2422 deletion completed in 6.151518364s • [SLOW TEST:10.690 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:11.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 15 13:26:11.552: INFO: Waiting up to 5m0s for pod "pod-438f2108-768d-49fb-9427-878be77f6a56" in namespace "emptydir-5935" to be "success or failure" May 15 13:26:11.565: INFO: Pod "pod-438f2108-768d-49fb-9427-878be77f6a56": Phase="Pending", Reason="", readiness=false. Elapsed: 13.123772ms May 15 13:26:13.848: INFO: Pod "pod-438f2108-768d-49fb-9427-878be77f6a56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295924613s May 15 13:26:15.865: INFO: Pod "pod-438f2108-768d-49fb-9427-878be77f6a56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313140216s STEP: Saw pod success May 15 13:26:15.865: INFO: Pod "pod-438f2108-768d-49fb-9427-878be77f6a56" satisfied condition "success or failure" May 15 13:26:15.867: INFO: Trying to get logs from node iruya-worker pod pod-438f2108-768d-49fb-9427-878be77f6a56 container test-container: STEP: delete the pod May 15 13:26:15.883: INFO: Waiting for pod pod-438f2108-768d-49fb-9427-878be77f6a56 to disappear May 15 13:26:15.912: INFO: Pod pod-438f2108-768d-49fb-9427-878be77f6a56 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:15.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5935" for this suite. 
May 15 13:26:21.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:22.006: INFO: namespace emptydir-5935 deletion completed in 6.090576651s • [SLOW TEST:10.547 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:22.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:26:22.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7" in namespace "downward-api-1327" to be "success or failure" May 15 13:26:22.142: INFO: Pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.893479ms May 15 13:26:24.597: INFO: Pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502375648s May 15 13:26:26.601: INFO: Pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7": Phase="Running", Reason="", readiness=true. Elapsed: 4.506259624s May 15 13:26:28.605: INFO: Pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.510258203s STEP: Saw pod success May 15 13:26:28.605: INFO: Pod "downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7" satisfied condition "success or failure" May 15 13:26:28.608: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7 container client-container: STEP: delete the pod May 15 13:26:28.652: INFO: Waiting for pod downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7 to disappear May 15 13:26:28.806: INFO: Pod downwardapi-volume-cb770a41-dd1b-4dda-9ae3-b4f0816dbee7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:28.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1327" for this suite. 
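[Editor's note: the downward-API volume test surfaces the container's own cpu request as a file via resourceFieldRef. A self-contained illustration; all names are assumptions:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests: {cpu: 250m}
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m          # the file then reads 250
  EOF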
May 15 13:26:34.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:34.918: INFO: namespace downward-api-1327 deletion completed in 6.109409232s • [SLOW TEST:12.912 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:34.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 15 13:26:35.011: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1653" to be "success or failure" May 15 13:26:35.025: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.769561ms May 15 13:26:37.094: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082983713s May 15 13:26:39.097: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086358488s May 15 13:26:41.101: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090231705s STEP: Saw pod success May 15 13:26:41.101: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 15 13:26:41.104: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 15 13:26:41.117: INFO: Waiting for pod pod-host-path-test to disappear May 15 13:26:41.122: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:41.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1653" for this suite. 
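[Editor's note: the hostPath test reads back the mode bits a hostPath mount presents inside the container. An illustrative equivalent; the node path and type are assumptions:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /test-volume"]
      volumeMounts:
      - {name: test-volume, mountPath: /test-volume}
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/hostpath-demo
        type: DirectoryOrCreate      # create the directory on the node if absent
  EOF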
May 15 13:26:47.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:47.259: INFO: namespace hostpath-1653 deletion completed in 6.134281555s • [SLOW TEST:12.340 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:47.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0515 13:26:48.406129 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:26:48.406: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-672" for this suite. 
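[Editor's note: the "expected 0 rs, got 1 rs" STEPs above are the poll catching the collector mid-flight, not a failure. A hand-run equivalent of the scenario, with assumed names:]
  kubectl create deployment gc-demo --image=nginx
  kubectl delete deployment gc-demo        # non-orphaning delete, the default
  kubectl get rs,pods -l app=gc-demo       # drains to "No resources found" shortly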
May 15 13:26:54.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:26:54.483: INFO: namespace gc-672 deletion completed in 6.073143281s • [SLOW TEST:7.223 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:26:54.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-5e01f5f2-73fb-45bf-a190-c979b0d24a4b STEP: Creating a pod to test consume configMaps May 15 13:26:54.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e" in namespace "configmap-6175" to be "success or failure" May 15 13:26:54.660: INFO: Pod "pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.72099ms May 15 13:26:56.665: INFO: Pod "pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037358953s May 15 13:26:58.669: INFO: Pod "pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042107935s STEP: Saw pod success May 15 13:26:58.669: INFO: Pod "pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e" satisfied condition "success or failure" May 15 13:26:58.698: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e container configmap-volume-test: STEP: delete the pod May 15 13:26:58.729: INFO: Waiting for pod pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e to disappear May 15 13:26:58.744: INFO: Pod pod-configmaps-8e852fa1-9d5f-4d15-85de-6ef50a150c3e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:26:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6175" for this suite. 
May 15 13:27:04.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:27:05.082: INFO: namespace configmap-6175 deletion completed in 6.334168957s • [SLOW TEST:10.599 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:27:05.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 13:27:05.171: INFO: Waiting up to 5m0s for pod "pod-cbfad346-810d-4c0b-825b-2023163dba36" in namespace "emptydir-9721" to be "success or failure" May 15 13:27:05.199: INFO: Pod "pod-cbfad346-810d-4c0b-825b-2023163dba36": Phase="Pending", Reason="", readiness=false. Elapsed: 27.838729ms May 15 13:27:07.204: INFO: Pod "pod-cbfad346-810d-4c0b-825b-2023163dba36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032215416s May 15 13:27:09.208: INFO: Pod "pod-cbfad346-810d-4c0b-825b-2023163dba36": Phase="Running", Reason="", readiness=true. Elapsed: 4.036403682s May 15 13:27:11.212: INFO: Pod "pod-cbfad346-810d-4c0b-825b-2023163dba36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040910363s STEP: Saw pod success May 15 13:27:11.212: INFO: Pod "pod-cbfad346-810d-4c0b-825b-2023163dba36" satisfied condition "success or failure" May 15 13:27:11.217: INFO: Trying to get logs from node iruya-worker pod pod-cbfad346-810d-4c0b-825b-2023163dba36 container test-container: STEP: delete the pod May 15 13:27:11.236: INFO: Waiting for pod pod-cbfad346-810d-4c0b-825b-2023163dba36 to disappear May 15 13:27:11.241: INFO: Pod pod-cbfad346-810d-4c0b-825b-2023163dba36 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:27:11.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9721" for this suite. 
May 15 13:27:17.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:27:17.334: INFO: namespace emptydir-9721 deletion completed in 6.090429411s • [SLOW TEST:12.252 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:27:17.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-b444dae4-d8d1-48a5-bada-a1d0393be4c6 STEP: Creating a pod to test consume configMaps May 15 13:27:17.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406" in namespace "projected-6123" to be "success or failure" May 15 13:27:17.435: INFO: Pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406": Phase="Pending", Reason="", readiness=false. Elapsed: 17.555002ms May 15 13:27:19.438: INFO: Pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020807906s May 15 13:27:21.442: INFO: Pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406": Phase="Running", Reason="", readiness=true. Elapsed: 4.024730107s May 15 13:27:23.445: INFO: Pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028013265s STEP: Saw pod success May 15 13:27:23.445: INFO: Pod "pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406" satisfied condition "success or failure" May 15 13:27:23.447: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406 container projected-configmap-volume-test: STEP: delete the pod May 15 13:27:23.466: INFO: Waiting for pod pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406 to disappear May 15 13:27:23.482: INFO: Pod pod-projected-configmaps-2a269c63-eb54-4d59-a316-10921ef63406 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:27:23.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6123" for this suite. 
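[Editor's note: the non-root variant runs the consuming container under a non-zero UID, so the projected files' modes must be readable to it. A hedged sketch of the securityContext involved; names and UIDs are illustrative:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-nonroot-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000        # any non-zero UID
      fsGroup: 1000
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "id && cat /etc/projected/data-1"]
      volumeMounts:
      - {name: proj, mountPath: /etc/projected}
    volumes:
    - name: proj
      projected:
        sources:
        - configMap: {name: projected-cm}    # must already exist
  EOF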
May 15 13:27:29.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:27:29.637: INFO: namespace projected-6123 deletion completed in 6.129860374s • [SLOW TEST:12.303 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:27:29.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 15 13:27:29.822: INFO: Waiting up to 5m0s for pod "var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956" in namespace "var-expansion-4105" to be "success or failure" May 15 13:27:29.839: INFO: Pod "var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956": Phase="Pending", Reason="", readiness=false. Elapsed: 16.532632ms May 15 13:27:31.843: INFO: Pod "var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020288441s May 15 13:27:34.017: INFO: Pod "var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194992831s STEP: Saw pod success May 15 13:27:34.017: INFO: Pod "var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956" satisfied condition "success or failure" May 15 13:27:34.021: INFO: Trying to get logs from node iruya-worker pod var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956 container dapi-container: STEP: delete the pod May 15 13:27:34.174: INFO: Waiting for pod var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956 to disappear May 15 13:27:34.182: INFO: Pod var-expansion-4d5a1fd3-2c3d-4755-9932-a44a6b17d956 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:27:34.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4105" for this suite. 
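[Editor's note: argument substitution uses the $(VAR) syntax resolved against env vars declared on the same container. Minimal illustration, names assumed:]
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - {name: GREETING, value: hello}
      command: ["echo"]
      args: ["$(GREETING) world"]    # expands to "hello world"
  EOF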
May 15 13:27:40.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:27:40.279: INFO: namespace var-expansion-4105 deletion completed in 6.094105347s • [SLOW TEST:10.641 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:27:40.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 15 13:27:40.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1021' May 15 13:27:43.559: INFO: stderr: "" May 15 13:27:43.559: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 13:27:43.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:27:43.705: INFO: stderr: "" May 15 13:27:43.705: INFO: stdout: "update-demo-nautilus-8rdpp update-demo-nautilus-pltq7 " May 15 13:27:43.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rdpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:27:43.793: INFO: stderr: "" May 15 13:27:43.793: INFO: stdout: "" May 15 13:27:43.793: INFO: update-demo-nautilus-8rdpp is created but not running May 15 13:27:48.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:27:48.894: INFO: stderr: "" May 15 13:27:48.894: INFO: stdout: "update-demo-nautilus-8rdpp update-demo-nautilus-pltq7 " May 15 13:27:48.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rdpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:27:48.993: INFO: stderr: "" May 15 13:27:48.993: INFO: stdout: "true" May 15 13:27:48.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8rdpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:27:49.092: INFO: stderr: "" May 15 13:27:49.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 13:27:49.092: INFO: validating pod update-demo-nautilus-8rdpp May 15 13:27:49.096: INFO: got data: { "image": "nautilus.jpg" } May 15 13:27:49.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 13:27:49.096: INFO: update-demo-nautilus-8rdpp is verified up and running May 15 13:27:49.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:27:49.187: INFO: stderr: "" May 15 13:27:49.187: INFO: stdout: "true" May 15 13:27:49.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:27:49.287: INFO: stderr: "" May 15 13:27:49.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 13:27:49.287: INFO: validating pod update-demo-nautilus-pltq7 May 15 13:27:49.308: INFO: got data: { "image": "nautilus.jpg" } May 15 13:27:49.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 13:27:49.308: INFO: update-demo-nautilus-pltq7 is verified up and running STEP: scaling down the replication controller May 15 13:27:49.310: INFO: scanned /root for discovery docs: May 15 13:27:49.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1021' May 15 13:27:50.435: INFO: stderr: "" May 15 13:27:50.435: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 15 13:27:50.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:27:50.529: INFO: stderr: "" May 15 13:27:50.529: INFO: stdout: "update-demo-nautilus-8rdpp update-demo-nautilus-pltq7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 13:27:55.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:27:55.629: INFO: stderr: "" May 15 13:27:55.629: INFO: stdout: "update-demo-nautilus-8rdpp update-demo-nautilus-pltq7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 13:28:00.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:28:00.724: INFO: stderr: "" May 15 13:28:00.724: INFO: stdout: "update-demo-nautilus-8rdpp update-demo-nautilus-pltq7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 13:28:05.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:28:05.822: INFO: stderr: "" May 15 13:28:05.823: INFO: stdout: "update-demo-nautilus-pltq7 " May 15 13:28:05.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:05.914: INFO: stderr: "" May 15 13:28:05.914: INFO: stdout: "true" May 15 13:28:05.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:06.009: INFO: stderr: "" May 15 13:28:06.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 13:28:06.009: INFO: validating pod update-demo-nautilus-pltq7 May 15 13:28:06.012: INFO: got data: { "image": "nautilus.jpg" } May 15 13:28:06.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 13:28:06.012: INFO: update-demo-nautilus-pltq7 is verified up and running STEP: scaling up the replication controller May 15 13:28:06.014: INFO: scanned /root for discovery docs: May 15 13:28:06.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1021' May 15 13:28:07.138: INFO: stderr: "" May 15 13:28:07.138: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 15 13:28:07.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:28:07.247: INFO: stderr: "" May 15 13:28:07.247: INFO: stdout: "update-demo-nautilus-9nddm update-demo-nautilus-pltq7 " May 15 13:28:07.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nddm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:07.349: INFO: stderr: "" May 15 13:28:07.349: INFO: stdout: "" May 15 13:28:07.350: INFO: update-demo-nautilus-9nddm is created but not running May 15 13:28:12.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1021' May 15 13:28:12.442: INFO: stderr: "" May 15 13:28:12.442: INFO: stdout: "update-demo-nautilus-9nddm update-demo-nautilus-pltq7 " May 15 13:28:12.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nddm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:12.528: INFO: stderr: "" May 15 13:28:12.528: INFO: stdout: "true" May 15 13:28:12.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nddm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:12.661: INFO: stderr: "" May 15 13:28:12.661: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 13:28:12.661: INFO: validating pod update-demo-nautilus-9nddm May 15 13:28:12.684: INFO: got data: { "image": "nautilus.jpg" } May 15 13:28:12.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 13:28:12.684: INFO: update-demo-nautilus-9nddm is verified up and running May 15 13:28:12.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:12.803: INFO: stderr: "" May 15 13:28:12.803: INFO: stdout: "true" May 15 13:28:12.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pltq7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1021' May 15 13:28:12.893: INFO: stderr: "" May 15 13:28:12.893: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 13:28:12.893: INFO: validating pod update-demo-nautilus-pltq7 May 15 13:28:12.896: INFO: got data: { "image": "nautilus.jpg" } May 15 13:28:12.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 15 13:28:12.896: INFO: update-demo-nautilus-pltq7 is verified up and running STEP: using delete to clean up resources May 15 13:28:12.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1021' May 15 13:28:12.998: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:28:12.998: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 15 13:28:12.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1021' May 15 13:28:13.103: INFO: stderr: "No resources found.\n" May 15 13:28:13.103: INFO: stdout: "" May 15 13:28:13.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1021 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 13:28:13.199: INFO: stderr: "" May 15 13:28:13.199: INFO: stdout: "update-demo-nautilus-9nddm\nupdate-demo-nautilus-pltq7\n" May 15 13:28:13.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1021' May 15 13:28:13.872: INFO: stderr: "No resources found.\n" May 15 13:28:13.872: INFO: stdout: "" May 15 13:28:13.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1021 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 13:28:13.963: INFO: stderr: "" May 15 13:28:13.963: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:28:13.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1021" for this suite. 
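[Editor's note: condensed form of the scale-and-verify loop the test just ran; the commands mirror the log, the jsonpath polling is an illustrative stand-in for its go-template checks:]
  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1021
  kubectl get pods --namespace=kubectl-1021 -l name=update-demo \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'   # repeat until one name remains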
May 15 13:28:20.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:28:20.108: INFO: namespace kubectl-1021 deletion completed in 6.141340606s • [SLOW TEST:39.828 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:28:20.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-7f718d26-8cc2-450b-84c9-b42230f6a3c6 STEP: Creating secret with name secret-projected-all-test-volume-0d66443c-41aa-4a52-ad64-a879270ba3b3 STEP: Creating a pod to test Check all projections for projected volume plugin May 15 13:28:20.235: INFO: Waiting up to 5m0s for pod "projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99" in namespace "projected-186" to be "success or failure" May 15 13:28:20.243: INFO: Pod "projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99": Phase="Pending", Reason="", readiness=false. Elapsed: 7.700428ms May 15 13:28:22.247: INFO: Pod "projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01138471s May 15 13:28:24.250: INFO: Pod "projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014295562s STEP: Saw pod success May 15 13:28:24.250: INFO: Pod "projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99" satisfied condition "success or failure" May 15 13:28:24.252: INFO: Trying to get logs from node iruya-worker pod projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99 container projected-all-volume-test: STEP: delete the pod May 15 13:28:24.342: INFO: Waiting for pod projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99 to disappear May 15 13:28:24.368: INFO: Pod projected-volume-8ff9bf53-8a13-4654-b2a1-3c3eeea76f99 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:28:24.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-186" for this suite. 
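The pod under test mounts a configMap, a secret, and downward-API fields behind a single projected volume. A minimal sketch of that shape (object names, keys, and paths here are placeholders, not the generated ones from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: example-configmap      # assumed to exist in the namespace
          items:
          - key: data
            path: cm-data
      - secret:
          name: example-secret         # assumed to exist in the namespace
          items:
          - key: data
            path: secret-data
EOF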
May 15 13:28:30.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:28:30.498: INFO: namespace projected-186 deletion completed in 6.12606714s • [SLOW TEST:10.389 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:28:30.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 15 13:28:30.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2012' May 15 13:28:30.862: INFO: stderr: "" May 15 13:28:30.862: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 15 13:28:31.874: INFO: Selector matched 1 pods for map[app:redis] May 15 13:28:31.874: INFO: Found 0 / 1 May 15 13:28:32.910: INFO: Selector matched 1 pods for map[app:redis] May 15 13:28:32.910: INFO: Found 0 / 1 May 15 13:28:33.892: INFO: Selector matched 1 pods for map[app:redis] May 15 13:28:33.892: INFO: Found 0 / 1 May 15 13:28:34.892: INFO: Selector matched 1 pods for map[app:redis] May 15 13:28:34.892: INFO: Found 1 / 1 May 15 13:28:34.892: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 13:28:34.894: INFO: Selector matched 1 pods for map[app:redis] May 15 13:28:34.894: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 15 13:28:34.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012' May 15 13:28:34.980: INFO: stderr: "" May 15 13:28:34.980: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 May 13:28:33.859 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 May 13:28:33.859 # Server started, Redis version 3.2.12\n1:M 15 May 13:28:33.859 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 May 13:28:33.859 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 15 13:28:34.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012 --tail=1' May 15 13:28:35.074: INFO: stderr: "" May 15 13:28:35.074: INFO: stdout: "1:M 15 May 13:28:33.859 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 15 13:28:35.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012 --limit-bytes=1' May 15 13:28:35.173: INFO: stderr: "" May 15 13:28:35.173: INFO: stdout: " " STEP: exposing timestamps May 15 13:28:35.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012 --tail=1 --timestamps' May 15 13:28:35.280: INFO: stderr: "" May 15 13:28:35.280: INFO: stdout: "2020-05-15T13:28:33.859965572Z 1:M 15 May 13:28:33.859 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 15 13:28:37.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012 --since=1s' May 15 13:28:37.887: INFO: stderr: "" May 15 13:28:37.887: INFO: stdout: "" May 15 13:28:37.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n9k2l redis-master --namespace=kubectl-2012 --since=24h' May 15 13:28:37.990: INFO: stderr: "" May 15 13:28:37.990: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 May 13:28:33.859 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 May 13:28:33.859 # Server started, Redis version 3.2.12\n1:M 15 May 13:28:33.859 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 May 13:28:33.859 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 15 13:28:37.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2012' May 15 13:28:38.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:28:38.082: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 15 13:28:38.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2012' May 15 13:28:38.191: INFO: stderr: "No resources found.\n" May 15 13:28:38.191: INFO: stdout: "" May 15 13:28:38.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2012 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 13:28:38.297: INFO: stderr: "" May 15 13:28:38.297: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:28:38.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2012" for this suite. 
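Each filtering step above is a single kubectl logs flag; against the same pod they read:

kubectl --namespace=kubectl-2012 logs redis-master-n9k2l redis-master --tail=1          # last line only
kubectl --namespace=kubectl-2012 logs redis-master-n9k2l redis-master --limit-bytes=1   # first byte only
kubectl --namespace=kubectl-2012 logs redis-master-n9k2l redis-master --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl --namespace=kubectl-2012 logs redis-master-n9k2l redis-master --since=1s        # entries from the last second (empty here: Redis logged nothing new)
kubectl --namespace=kubectl-2012 logs redis-master-n9k2l redis-master --since=24h       # entries from the last 24 hours (the full startup banner)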
May 15 13:29:00.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:29:00.376: INFO: namespace kubectl-2012 deletion completed in 22.075260968s • [SLOW TEST:29.878 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:29:00.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 15 13:29:05.518: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:29:06.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1289" for this suite. 
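Adoption and release are driven entirely by the label selector and the pod's ownerReferences. A sketch of the two transitions, assuming the pod name from this run and a ReplicaSet selecting on the 'name' label:

# after adoption the orphan pod is owned by the ReplicaSet
kubectl --namespace=replicaset-1289 get pod pod-adoption-release \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'    # -> ReplicaSet

# rewriting the matched label makes the controller release the pod
# (its ownerReference is dropped and the ReplicaSet starts a replacement);
# any value outside the selector works
kubectl --namespace=replicaset-1289 label pod pod-adoption-release \
  name=released --overwrite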
May 15 13:29:28.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:29:28.709: INFO: namespace replicaset-1289 deletion completed in 22.094625219s

• [SLOW TEST:28.333 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:29:28.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 13:29:52.832: INFO: Container started at 2020-05-15 13:29:31 +0000 UTC, pod became ready at 2020-05-15 13:29:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:29:52.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1662" for this suite.
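The 20-second gap between container start (13:29:31) and readiness (13:29:51) is the probe's initial delay at work. A minimal sketch of a pod that deliberately cannot become Ready before its delay elapses (numbers illustrative, not the test's exact spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-example
spec:
  containers:
  - name: probe-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]        # always succeeds once probing begins
      initialDelaySeconds: 20    # the pod cannot be Ready before this
      periodSeconds: 5
EOF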
May 15 13:30:14.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:30:14.976: INFO: namespace container-probe-1662 deletion completed in 22.139696468s • [SLOW TEST:46.266 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:30:14.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:30:15.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e" in namespace "downward-api-4426" to be "success or failure" May 15 13:30:15.259: INFO: Pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e": Phase="Pending", Reason="", readiness=false. Elapsed: 124.40311ms May 15 13:30:17.263: INFO: Pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128545692s May 15 13:30:19.268: INFO: Pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.133093087s May 15 13:30:21.272: INFO: Pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137336024s STEP: Saw pod success May 15 13:30:21.272: INFO: Pod "downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e" satisfied condition "success or failure" May 15 13:30:21.275: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e container client-container: STEP: delete the pod May 15 13:30:21.314: INFO: Waiting for pod downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e to disappear May 15 13:30:21.330: INFO: Pod downwardapi-volume-7f329eb7-6c3d-4f1c-b3a0-ba8a8b4d9b0e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:30:21.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4426" for this suite. 
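When the container declares no limits, limits.memory and limits.cpu exposed through the downward API fall back to the node's allocatable values, which is what this spec asserts. A sketch of the volume wiring (paths and divisor are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory    # no limit set -> node allocatable memory
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # report in millicores
EOF

The next spec, "should provide container's cpu limit", exercises the same mechanism with an explicit CPU limit set on the container.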
May 15 13:30:27.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:30:27.443: INFO: namespace downward-api-4426 deletion completed in 6.109466687s • [SLOW TEST:12.467 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:30:27.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:30:27.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e" in namespace "downward-api-2453" to be "success or failure" May 15 13:30:27.804: INFO: Pod "downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.579038ms May 15 13:30:29.809: INFO: Pod "downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050523659s May 15 13:30:31.813: INFO: Pod "downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054841068s STEP: Saw pod success May 15 13:30:31.814: INFO: Pod "downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e" satisfied condition "success or failure" May 15 13:30:31.816: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e container client-container: STEP: delete the pod May 15 13:30:31.873: INFO: Waiting for pod downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e to disappear May 15 13:30:31.882: INFO: Pod downwardapi-volume-ee2bd7b3-314e-4f2e-a247-c2ce5575f52e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:30:31.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2453" for this suite. 
May 15 13:30:37.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:30:37.961: INFO: namespace downward-api-2453 deletion completed in 6.076356029s • [SLOW TEST:10.518 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:30:37.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 15 13:30:38.021: INFO: Waiting up to 5m0s for pod "client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9" in namespace "containers-3690" to be "success or failure" May 15 13:30:38.025: INFO: Pod "client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627158ms May 15 13:30:40.030: INFO: Pod "client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008116413s May 15 13:30:42.033: INFO: Pod "client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012014234s STEP: Saw pod success May 15 13:30:42.033: INFO: Pod "client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9" satisfied condition "success or failure" May 15 13:30:42.036: INFO: Trying to get logs from node iruya-worker2 pod client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9 container test-container: STEP: delete the pod May 15 13:30:42.112: INFO: Waiting for pod client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9 to disappear May 15 13:30:42.127: INFO: Pod client-containers-a0d85624-3fdd-4b42-a1fa-fa6b597e15c9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:30:42.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3690" for this suite. 
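"Overriding the image's default arguments (docker cmd)" maps to the pod spec's args field, which replaces the image CMD (setting command would replace the ENTRYPOINT instead). A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "overridden", "arguments"]   # replaces the image CMD; busybox has no ENTRYPOINT, so this is the full command
EOF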
May 15 13:30:48.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:30:48.219: INFO: namespace containers-3690 deletion completed in 6.088066953s • [SLOW TEST:10.257 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:30:48.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 13:30:48.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1642' May 15 13:30:48.406: INFO: stderr: "" May 15 13:30:48.407: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 15 13:30:48.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1642' May 15 13:31:01.868: INFO: stderr: "" May 15 13:31:01.868: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:31:01.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1642" for this suite. 
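With --restart=Never, kubectl run selects the run-pod/v1 generator and creates a bare Pod rather than a Deployment; the teardown's delete blocks until the pod has actually terminated, which accounts for the 13-second gap above. From this run (the explicit --generator flag was still accepted in kubectl 1.15; later releases removed it):

kubectl --namespace=kubectl-1642 run e2e-test-nginx-pod --restart=Never \
  --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl --namespace=kubectl-1642 get pod e2e-test-nginx-pod     # verify the pod was created
kubectl --namespace=kubectl-1642 delete pod e2e-test-nginx-pod  # waits for termination by default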
May 15 13:31:07.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:31:07.965: INFO: namespace kubectl-1642 deletion completed in 6.088128294s

• [SLOW TEST:19.746 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:31:07.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:31:13.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3414" for this suite.
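The ordering guarantee being tested comes from the watch protocol: every event carries the object's resourceVersion, and watches started from the same resourceVersion must deliver events in the same order. The raw stream can be inspected directly against the API (namespace assumed; each returned JSON line is an ADDED/MODIFIED/DELETED event whose object carries metadata.resourceVersion):

kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"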
May 15 13:31:19.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:31:19.702: INFO: namespace watch-3414 deletion completed in 6.179578617s • [SLOW TEST:11.736 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:31:19.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:31:19.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4" in namespace "projected-2471" to be "success or failure" May 15 13:31:19.810: INFO: Pod "downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720151ms May 15 13:31:21.835: INFO: Pod "downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029258743s May 15 13:31:23.900: INFO: Pod "downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094345489s STEP: Saw pod success May 15 13:31:23.900: INFO: Pod "downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4" satisfied condition "success or failure" May 15 13:31:23.903: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4 container client-container: STEP: delete the pod May 15 13:31:23.931: INFO: Waiting for pod downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4 to disappear May 15 13:31:23.936: INFO: Pod downwardapi-volume-d8b987e5-02d7-4deb-96b6-7578f22ef9f4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:31:23.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2471" for this suite. 
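defaultMode sets the permission bits applied to every file materialized in the volume (files default to 0644 when the field is unset). A sketch on a projected downward-API volume, with 0400 chosen for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to every file in this volume
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

The same defaultMode field exists on plain downwardAPI, configMap, and secret volumes, which is what the following Downward API volume spec exercises.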
May 15 13:31:29.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:31:30.196: INFO: namespace projected-2471 deletion completed in 6.257431736s • [SLOW TEST:10.494 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:31:30.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:31:30.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f" in namespace "downward-api-5744" to be "success or failure" May 15 13:31:30.402: INFO: Pod "downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.230299ms May 15 13:31:32.407: INFO: Pod "downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038058692s May 15 13:31:34.412: INFO: Pod "downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042761408s STEP: Saw pod success May 15 13:31:34.412: INFO: Pod "downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f" satisfied condition "success or failure" May 15 13:31:34.415: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f container client-container: STEP: delete the pod May 15 13:31:34.454: INFO: Waiting for pod downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f to disappear May 15 13:31:34.475: INFO: Pod downwardapi-volume-adea5030-7046-488d-acf5-2454bb19421f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:31:34.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5744" for this suite. 
May 15 13:31:40.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:31:40.552: INFO: namespace downward-api-5744 deletion completed in 6.072681521s • [SLOW TEST:10.355 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:31:40.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:31:40.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371" in namespace "downward-api-8244" to be "success or failure" May 15 13:31:40.625: INFO: Pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03322ms May 15 13:31:42.628: INFO: Pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015224167s May 15 13:31:44.632: INFO: Pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371": Phase="Running", Reason="", readiness=true. Elapsed: 4.019176912s May 15 13:31:46.637: INFO: Pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023956909s STEP: Saw pod success May 15 13:31:46.637: INFO: Pod "downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371" satisfied condition "success or failure" May 15 13:31:46.640: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371 container client-container: STEP: delete the pod May 15 13:31:46.657: INFO: Waiting for pod downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371 to disappear May 15 13:31:46.673: INFO: Pod downwardapi-volume-12f5805e-9721-41b2-87be-69bb8a99d371 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:31:46.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8244" for this suite. 
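A per-item mode overrides defaultMode for that one file, which is the distinction this spec covers relative to the previous two. A sketch on a plain downward-API volume:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-itemmode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644
      items:
      - path: podname
        mode: 0400               # overrides defaultMode for this file only
        fieldRef:
          fieldPath: metadata.name
EOF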
May 15 13:31:52.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:31:52.784: INFO: namespace downward-api-8244 deletion completed in 6.107104179s

• [SLOW TEST:12.232 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:31:52.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9469
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9469
STEP: Creating statefulset with conflicting port in namespace statefulset-9469
STEP: Waiting until pod test-pod will start running in namespace statefulset-9469
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9469
May 15 13:31:57.015: INFO: Observed stateful pod in namespace: statefulset-9469, name: ss-0, uid: f7a3741a-bf59-4bf4-adbc-4f79941dd4e1, status phase: Pending. Waiting for statefulset controller to delete.
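The "conflicting port" both objects request is a hostPort: the pre-created test-pod claims it first, so every ss-0 the StatefulSet controller creates fails the kubelet's PodFitsHostPorts admission check on that node, as the failure dump below shows. Reconstructed roughly from the describe output (port 21017 on iruya-worker), the conflicting pod looks like:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: statefulset-9469
spec:
  nodeName: iruya-worker           # pinned to the node where ss-0 will land
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017              # only one pod per node may bind this
EOF

With the port held, ss-0 stays Pending; the five-minute wait for an observed delete/recreate cycle then times out, and the spec is recorded as this run's failure.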
May 15 13:36:57.015: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 15 13:36:57.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-9469' May 15 13:36:57.175: INFO: stderr: "" May 15 13:36:57.176: INFO: stdout: "Name: ss-0\nNamespace: statefulset-9469\nPriority: 0\nNode: iruya-worker/\nLabels: baz=blah\n controller-revision-hash=ss-5867494796\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7fj4 (ro)\nVolumes:\n default-token-s7fj4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s7fj4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m2s kubelet, iruya-worker Predicate PodFitsHostPorts failed\n" May 15 13:36:57.176: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-9469 Priority: 0 Node: iruya-worker/ Labels: baz=blah controller-revision-hash=ss-5867494796 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7fj4 (ro) Volumes: default-token-s7fj4: Type: Secret (a volume populated by a Secret) SecretName: default-token-s7fj4 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m2s kubelet, iruya-worker Predicate PodFitsHostPorts failed May 15 13:36:57.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-9469 --tail=100' May 15 13:36:57.292: INFO: rc: 1 May 15 13:36:57.292: INFO: Last 100 log lines of ss-0: May 15 13:36:57.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-9469' May 15 13:36:57.430: INFO: stderr: "" May 15 13:36:57.430: INFO: stdout: "Name: test-pod\nNamespace: statefulset-9469\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Fri, 15 May 2020 13:31:53 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.244.2.76\nContainers:\n nginx:\n Container ID: containerd://957d63195d1dcd20365a9d3e5e5c24bec26fbf52aaa944bf0319c9dcc7e95fe2\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 15 May 2020 13:31:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7fj4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-s7fj4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s7fj4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m3s kubelet, iruya-worker Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m2s kubelet, iruya-worker Created container nginx\n Normal Started 5m2s kubelet, iruya-worker Started container nginx\n" May 15 13:36:57.430: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-9469 Priority: 0 Node: iruya-worker/172.17.0.6 Start Time: Fri, 15 May 2020 13:31:53 +0000 Labels: Annotations: Status: Running IP: 10.244.2.76 Containers: nginx: Container ID: containerd://957d63195d1dcd20365a9d3e5e5c24bec26fbf52aaa944bf0319c9dcc7e95fe2 Image: docker.io/library/nginx:1.14-alpine Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 15 May 2020 13:31:55 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7fj4 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-s7fj4: Type: Secret (a volume populated by a Secret) SecretName: default-token-s7fj4 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m3s kubelet, iruya-worker Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m2s kubelet, iruya-worker Created container nginx Normal Started 5m2s kubelet, iruya-worker Started container nginx May 15 13:36:57.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-9469 --tail=100' May 15 13:36:57.527: INFO: stderr: "" May 15 13:36:57.527: INFO: stdout: "" May 15 13:36:57.527: INFO: Last 100 log lines of test-pod: May 15 13:36:57.527: INFO: Deleting all statefulset in ns statefulset-9469 May 15 13:36:57.530: INFO: Scaling statefulset ss to 0 May 15 13:37:07.588: INFO: Waiting for statefulset status.replicas updated to 0 May 15 13:37:07.590: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-9469". STEP: Found 12 events. 
May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-9469/ss is recreating failed Pod ss-0 May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:53 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:54 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:54 +0000 UTC - event for test-pod: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine May 15 13:37:07.603: INFO: At 2020-05-15 13:31:55 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed May 15 13:37:07.603: INFO: At 2020-05-15 13:31:55 +0000 UTC - event for test-pod: {kubelet iruya-worker} Created: Created container nginx May 15 13:37:07.603: INFO: At 2020-05-15 13:31:55 +0000 UTC - event for test-pod: {kubelet iruya-worker} Started: Started container nginx May 15 13:37:07.604: INFO: POD NODE PHASE GRACE CONDITIONS May 15 13:37:07.605: INFO: test-pod iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:31:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:31:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:31:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:31:53 +0000 UTC }] May 15 13:37:07.605: INFO: May 15 13:37:07.609: INFO: Logging node info for node iruya-control-plane May 15 13:37:07.611: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:5b69a0f9-55ac-48be-a8d0-5e04b939b798,ResourceVersion:11040354,Generation:0,CreationTimestamp:2020-03-15 18:24:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 
DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-15 13:37:01 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-15 13:37:01 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-15 13:37:01 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-15 13:37:01 +0000 UTC 2020-03-15 18:25:00 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f14f6f4d1640fcaab2243401c9f154,SystemUUID:7c6ca533-492e-400c-b058-c282f97a69ec,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 15 13:37:07.611: INFO: Logging kubelet events for node iruya-control-plane May 15 13:37:07.613: INFO: Logging pods the kubelet thinks is on node iruya-control-plane May 15 13:37:07.620: INFO: kube-apiserver-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container kube-apiserver ready: true, restart count 0 May 15 13:37:07.620: INFO: kube-controller-manager-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container kube-controller-manager ready: true, restart count 0 May 15 13:37:07.620: INFO: kube-scheduler-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container kube-scheduler ready: true, restart count 0 May 15 13:37:07.620: INFO: etcd-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container etcd ready: true, restart count 0 May 15 13:37:07.620: INFO: kindnet-zn8sx started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container kindnet-cni ready: true, restart count 0 May 15 13:37:07.620: INFO: kube-proxy-46nsr started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container kube-proxy ready: true, restart count 0 May 15 13:37:07.620: INFO: 
local-path-provisioner-d4947b89c-72frh started at 2020-03-15 18:25:04 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.620: INFO: Container local-path-provisioner ready: true, restart count 0 W0515 13:37:07.623480 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:37:07.705: INFO: Latency metrics for node iruya-control-plane May 15 13:37:07.705: INFO: Logging node info for node iruya-worker May 15 13:37:07.721: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:94e58020-6063-4274-b0bd-d7c4f772701c,ResourceVersion:11040295,Generation:0,CreationTimestamp:2020-03-15 18:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-15 13:36:34 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-15 13:36:34 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-15 13:36:34 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-15 13:36:34 +0000 UTC 2020-03-15 18:25:15 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.6} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5332b21b7d0c4f35b2434f4fc8bea1cf,SystemUUID:92e1ff09-3c3c-490b-b499-0de27dc489ae,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} 
{[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 15 13:37:07.721: INFO: Logging kubelet events for node iruya-worker May 15 13:37:07.724: INFO: Logging pods the kubelet thinks is on node iruya-worker May 15 13:37:07.730: INFO: test-pod started at 2020-05-15 13:31:53 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.730: INFO: Container nginx ready: true, restart count 0 May 15 13:37:07.730: INFO: kube-proxy-pmz4p started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.730: INFO: Container kube-proxy ready: true, restart count 0 May 15 13:37:07.730: INFO: kindnet-gwz5g started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.730: INFO: Container kindnet-cni ready: true, restart count 0 W0515 13:37:07.734452 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:37:07.780: INFO: Latency metrics for node iruya-worker May 15 13:37:07.780: INFO: Logging node info for node iruya-worker2 May 15 13:37:07.783: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:67dfdf76-d64a-45cb-a2a9-755b73c85644,ResourceVersion:11040286,Generation:0,CreationTimestamp:2020-03-15 18:24:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-15 13:36:30 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-15 13:36:30 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-15 13:36:30 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-15 13:36:30 +0000 UTC 2020-03-15 18:24:52 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.5} {Hostname 
iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5fda03f0d02548b7a74f8a4b6cc8795b,SystemUUID:d8b7a3a5-76b4-4c0b-85d7-cdb97f2c8b1a,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 15 13:37:07.783: INFO: Logging kubelet events for node iruya-worker2 May 15 13:37:07.786: INFO: Logging pods the kubelet thinks is on node iruya-worker2 May 15 13:37:07.792: INFO: coredns-5d4dd4b4db-gm7vr started at 2020-03-15 18:24:52 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.792: INFO: Container coredns ready: true, restart count 0 May 15 13:37:07.792: INFO: coredns-5d4dd4b4db-6jcgz started at 2020-03-15 18:24:54 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.792: INFO: Container coredns ready: true, restart count 0 May 15 13:37:07.792: INFO: kube-proxy-vwbcj started at 2020-03-15 18:24:42 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.792: INFO: Container kube-proxy ready: true, restart count 0 May 15 13:37:07.792: INFO: kindnet-mgd8b started at 2020-03-15 18:24:43 +0000 UTC (0+1 container statuses recorded) May 15 13:37:07.792: INFO: Container kindnet-cni ready: true, restart count 0 W0515 13:37:07.795567 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 13:37:07.846: INFO: Latency metrics for node iruya-worker2 May 15 13:37:07.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9469" for this suite. 
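[Annotation] The node dumps above are post-mortem diagnostics collected after a StatefulSet spec failed (the failure summary follows below: pod ss-0 was expected to be re-created after eviction). For orientation, a minimal sketch of the kind of StatefulSet such a spec drives; every name here is illustrative, not the test's actual manifest:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # a headless Service with this name is assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.14-alpine   # an image already present on the nodes above
        ports:
        - containerPort: 80

The spec's assertion is about identity: when ss-0 is evicted, the StatefulSet controller must recreate a pod with the same ordinal name, which is what the failure message says did not happen here.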
May 15 13:37:29.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:37:29.968: INFO: namespace statefulset-9469 deletion completed in 22.118627015s • Failure [337.183 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:36:57.015: Pod ss-0 expected to be re-created at least once /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:37:29.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-98cb962b-b059-41a2-bea2-a5837d1b47d0 in namespace container-probe-6263 May 15 13:37:34.146: INFO: Started pod liveness-98cb962b-b059-41a2-bea2-a5837d1b47d0 in namespace container-probe-6263 STEP: checking the pod's current state and verifying that restartCount is present May 15 13:37:34.150: INFO: Initial restart count of pod liveness-98cb962b-b059-41a2-bea2-a5837d1b47d0 is 0 May 15 13:37:59.122: INFO: Restart count of pod container-probe-6263/liveness-98cb962b-b059-41a2-bea2-a5837d1b47d0 is now 1 (24.971963789s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:37:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6263" for this suite. 
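[Annotation] The probe spec above simply watches restartCount go from 0 to 1. A minimal sketch of a pod with an HTTP liveness probe on /healthz; the image's /server entrypoint and the probe timings are assumptions modeled on the liveness test image listed in the node inventories above and on the ~25s observed before the restart:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # illustrative
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # present in the node image lists above
    args: ["/server"]          # assumption: serves 200 on /healthz for a while, then starts failing
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1

Once the probe fails, the kubelet kills and restarts the container; that restart is exactly the count transition the log records.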
May 15 13:38:05.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:38:05.296: INFO: namespace container-probe-6263 deletion completed in 6.153223155s • [SLOW TEST:35.327 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:38:05.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 15 13:38:06.071: INFO: Pod name wrapped-volume-race-b7de630f-ae6d-48ee-9283-6c69727a9132: Found 0 pods out of 5 May 15 13:38:11.080: INFO: Pod name wrapped-volume-race-b7de630f-ae6d-48ee-9283-6c69727a9132: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b7de630f-ae6d-48ee-9283-6c69727a9132 in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods May 15 13:38:25.171: INFO: Deleting ReplicationController wrapped-volume-race-b7de630f-ae6d-48ee-9283-6c69727a9132 took: 8.292216ms May 15 13:38:25.472: INFO: Terminating ReplicationController wrapped-volume-race-b7de630f-ae6d-48ee-9283-6c69727a9132 pods took: 300.357896ms STEP: Creating RC which spawns configmap-volume pods May 15 13:39:12.441: INFO: Pod name wrapped-volume-race-204cff27-f2b0-401e-a21b-80449f8ea951: Found 0 pods out of 5 May 15 13:39:17.445: INFO: Pod name wrapped-volume-race-204cff27-f2b0-401e-a21b-80449f8ea951: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-204cff27-f2b0-401e-a21b-80449f8ea951 in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods May 15 13:39:33.527: INFO: Deleting ReplicationController wrapped-volume-race-204cff27-f2b0-401e-a21b-80449f8ea951 took: 7.659879ms May 15 13:39:33.827: INFO: Terminating ReplicationController wrapped-volume-race-204cff27-f2b0-401e-a21b-80449f8ea951 pods took: 300.239934ms STEP: Creating RC which spawns configmap-volume pods May 15 13:40:12.665: INFO: Pod name wrapped-volume-race-55ad7d5c-ad42-44b6-9274-302f85185e7d: Found 0 pods out of 5 May 15 13:40:17.672: INFO: Pod name wrapped-volume-race-55ad7d5c-ad42-44b6-9274-302f85185e7d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-55ad7d5c-ad42-44b6-9274-302f85185e7d in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods May 15 13:40:33.765: INFO: Deleting 
ReplicationController wrapped-volume-race-55ad7d5c-ad42-44b6-9274-302f85185e7d took: 15.431365ms May 15 13:40:34.065: INFO: Terminating ReplicationController wrapped-volume-race-55ad7d5c-ad42-44b6-9274-302f85185e7d pods took: 300.384788ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:41:12.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7091" for this suite. May 15 13:41:22.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:41:23.053: INFO: namespace emptydir-wrapper-7091 deletion completed in 10.083173449s • [SLOW TEST:197.757 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:41:23.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 13:41:23.390: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:41:29.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5542" for this suite. 
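[Annotation] The pod used by the websocket-exec spec above only needs to stay alive long enough for a client to attach; a sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websocket     # illustrative
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29   # present in the node image lists above
    command: ["/bin/sh", "-c", "sleep 600"]

The exec itself never appears in a manifest: the client upgrades a request on the pod's exec subresource (/api/v1/namespaces/<ns>/pods/<name>/exec?command=...) to a websocket connection and streams stdin/stdout over it, which is the path this spec exercises.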
May 15 13:42:15.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:42:15.712: INFO: namespace pods-5542 deletion completed in 46.125039155s • [SLOW TEST:52.658 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:42:15.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 13:42:19.971: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:42:20.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9157" for this suite. 
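[Annotation] The termination-message spec checks two knobs at once: a non-default terminationMessagePath and a non-root user. A minimal sketch, assuming a busybox container that writes DONE (the string the log above matches) to the custom path:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default; the default is /dev/termination-log
    securityContext:
      runAsUser: 1000        # assumption: any non-root UID satisfies the "non-root user" clause

After the container exits, the kubelet copies the file's contents into the container status, which is where the test reads the DONE message back.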
May 15 13:42:26.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:42:26.266: INFO: namespace container-runtime-9157 deletion completed in 6.141291438s • [SLOW TEST:10.554 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:42:26.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 15 13:42:26.297: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix235814230/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:42:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-39" for this suite. 
May 15 13:42:32.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:42:32.450: INFO: namespace kubectl-39 deletion completed in 6.083743944s • [SLOW TEST:6.184 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:42:32.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 13:42:32.602: INFO: Waiting up to 5m0s for pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db" in namespace "emptydir-6257" to be "success or failure" May 15 13:42:32.606: INFO: Pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292075ms May 15 13:42:34.611: INFO: Pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00919611s May 15 13:42:36.615: INFO: Pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db": Phase="Running", Reason="", readiness=true. Elapsed: 4.012798018s May 15 13:42:38.619: INFO: Pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017094794s STEP: Saw pod success May 15 13:42:38.619: INFO: Pod "pod-7c3ccd93-0b5f-4b39-b681-306835f087db" satisfied condition "success or failure" May 15 13:42:38.622: INFO: Trying to get logs from node iruya-worker2 pod pod-7c3ccd93-0b5f-4b39-b681-306835f087db container test-container: STEP: delete the pod May 15 13:42:38.646: INFO: Waiting for pod pod-7c3ccd93-0b5f-4b39-b681-306835f087db to disappear May 15 13:42:38.649: INFO: Pod pod-7c3ccd93-0b5f-4b39-b681-306835f087db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:42:38.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6257" for this suite. 
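[Annotation] The spec name (non-root,0666,default) encodes three parameters: run as a non-root user, create a file with mode 0666, on the default emptyDir medium (node disk rather than tmpfs). A sketch of the shape of such a pod; the mounttest flags are assumptions about that test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # present in the node image lists above
    args:                      # assumed flags: create a 0666 file, then report its permissions
    - --new_file_0666=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    securityContext:
      runAsUser: 1000          # assumption: any non-root UID
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # empty spec selects the default medium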
May 15 13:42:44.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:42:44.798: INFO: namespace emptydir-6257 deletion completed in 6.145730656s • [SLOW TEST:12.347 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:42:44.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 15 13:42:49.421: INFO: Successfully updated pod "labelsupdateb69ea957-db53-473a-a3f3-678dc7963fbb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:42:53.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3107" for this suite. 
May 15 13:43:15.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:43:15.586: INFO: namespace downward-api-3107 deletion completed in 22.108419326s • [SLOW TEST:30.787 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:43:15.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 15 13:43:15.665: INFO: Waiting up to 5m0s for pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3" in namespace "var-expansion-2535" to be "success or failure" May 15 13:43:15.689: INFO: Pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.687775ms May 15 13:43:17.693: INFO: Pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028336407s May 15 13:43:19.698: INFO: Pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3": Phase="Running", Reason="", readiness=true. Elapsed: 4.033098267s May 15 13:43:21.703: INFO: Pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037744912s STEP: Saw pod success May 15 13:43:21.703: INFO: Pod "var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3" satisfied condition "success or failure" May 15 13:43:21.706: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3 container dapi-container: STEP: delete the pod May 15 13:43:21.722: INFO: Waiting for pod var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3 to disappear May 15 13:43:21.727: INFO: Pod var-expansion-c147d443-106c-4f41-aeb5-82cdf85d88e3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:43:21.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2535" for this suite. 
May 15 13:43:27.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:43:27.820: INFO: namespace var-expansion-2535 deletion completed in 6.090505515s • [SLOW TEST:12.234 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:43:27.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:43:27.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b" in namespace "projected-4079" to be "success or failure" May 15 13:43:27.901: INFO: Pod "downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051126ms May 15 13:43:30.020: INFO: Pod "downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122939926s May 15 13:43:32.025: INFO: Pod "downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127683651s STEP: Saw pod success May 15 13:43:32.025: INFO: Pod "downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b" satisfied condition "success or failure" May 15 13:43:32.028: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b container client-container: STEP: delete the pod May 15 13:43:32.076: INFO: Waiting for pod downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b to disappear May 15 13:43:32.082: INFO: Pod downwardapi-volume-32096e9f-11d4-4eec-9ab9-9ff86b89405b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:43:32.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4079" for this suite. 
May 15 13:43:38.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:43:38.175: INFO: namespace projected-4079 deletion completed in 6.089752163s • [SLOW TEST:10.354 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:43:38.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:43:38.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440" in namespace "projected-5292" to be "success or failure" May 15 13:43:38.274: INFO: Pod "downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247843ms May 15 13:43:40.278: INFO: Pod "downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010463615s May 15 13:43:42.282: INFO: Pod "downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014137476s STEP: Saw pod success May 15 13:43:42.282: INFO: Pod "downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440" satisfied condition "success or failure" May 15 13:43:42.284: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440 container client-container: STEP: delete the pod May 15 13:43:42.329: INFO: Waiting for pod downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440 to disappear May 15 13:43:42.360: INFO: Pod downwardapi-volume-e33b7ea8-9228-4f5f-93a3-f35ef3fc8440 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:43:42.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5292" for this suite. 
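[Annotation] Same mechanism as the CPU spec above, but with an explicit memory limit and a divisor, so the projected file reports the limit in the requested unit. Sketch, names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi     # the file reads 64 rather than the raw byte count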
May 15 13:43:48.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:43:48.435: INFO: namespace projected-5292 deletion completed in 6.071123881s • [SLOW TEST:10.260 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:43:48.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 15 13:43:49.034: INFO: created pod pod-service-account-defaultsa May 15 13:43:49.034: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 15 13:43:49.109: INFO: created pod pod-service-account-mountsa May 15 13:43:49.109: INFO: pod pod-service-account-mountsa service account token volume mount: true May 15 13:43:49.136: INFO: created pod pod-service-account-nomountsa May 15 13:43:49.136: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 15 13:43:49.159: INFO: created pod pod-service-account-defaultsa-mountspec May 15 13:43:49.159: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 15 13:43:49.245: INFO: created pod pod-service-account-mountsa-mountspec May 15 13:43:49.245: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 15 13:43:49.415: INFO: created pod pod-service-account-nomountsa-mountspec May 15 13:43:49.415: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 15 13:43:49.442: INFO: created pod pod-service-account-defaultsa-nomountspec May 15 13:43:49.442: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 15 13:43:49.511: INFO: created pod pod-service-account-mountsa-nomountspec May 15 13:43:49.511: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 15 13:43:49.594: INFO: created pod pod-service-account-nomountsa-nomountspec May 15 13:43:49.594: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:43:49.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8271" for this suite. 
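[Annotation] The nine pods above enumerate the combinations behind the opt-out rule: automountServiceAccountToken can be set on the ServiceAccount or on the pod spec, and the pod-level field wins whenever both are present. A sketch of the opt-out pair, names illustrative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # SA-level default for pods using this account
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting; overrides the SA value when both are set
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]

The test's "token volume mount: true/false" lines above are exactly this rule applied across all combinations of SA and pod settings.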
May 15 13:44:19.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:44:19.841: INFO: namespace svcaccounts-8271 deletion completed in 30.214833803s • [SLOW TEST:31.406 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:44:19.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-be71de37-c4c0-436a-bf98-bb59a5faa572 STEP: Creating a pod to test consume secrets May 15 13:44:19.928: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468" in namespace "projected-1516" to be "success or failure" May 15 13:44:19.932: INFO: Pod "pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864197ms May 15 13:44:21.996: INFO: Pod "pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068073425s May 15 13:44:24.001: INFO: Pod "pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072880915s STEP: Saw pod success May 15 13:44:24.001: INFO: Pod "pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468" satisfied condition "success or failure" May 15 13:44:24.005: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468 container projected-secret-volume-test: STEP: delete the pod May 15 13:44:24.045: INFO: Waiting for pod pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468 to disappear May 15 13:44:24.058: INFO: Pod pod-projected-secrets-7b2c5f1b-9712-45dc-a707-f55515daa468 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:44:24.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1516" for this suite. 
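[Annotation] "Mappings and Item Mode" means the secret key is remapped to a custom path and given an explicit file mode inside the projected volume. Sketch with illustrative names and data:

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping"
            mode: 0400              # the "item mode"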
May 15 13:44:30.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:44:30.150: INFO: namespace projected-1516 deletion completed in 6.069910695s • [SLOW TEST:10.308 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:44:30.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 15 13:44:30.334: INFO: namespace kubectl-7890 May 15 13:44:30.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7890' May 15 13:44:33.494: INFO: stderr: "" May 15 13:44:33.494: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 15 13:44:34.547: INFO: Selector matched 1 pods for map[app:redis] May 15 13:44:34.547: INFO: Found 0 / 1 May 15 13:44:35.499: INFO: Selector matched 1 pods for map[app:redis] May 15 13:44:35.499: INFO: Found 0 / 1 May 15 13:44:36.499: INFO: Selector matched 1 pods for map[app:redis] May 15 13:44:36.499: INFO: Found 1 / 1 May 15 13:44:36.499: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 13:44:36.502: INFO: Selector matched 1 pods for map[app:redis] May 15 13:44:36.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 13:44:36.502: INFO: wait on redis-master startup in kubectl-7890 May 15 13:44:36.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qk994 redis-master --namespace=kubectl-7890' May 15 13:44:36.610: INFO: stderr: "" May 15 13:44:36.610: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 May 13:44:36.290 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 May 13:44:36.290 # Server started, Redis version 3.2.12\n1:M 15 May 13:44:36.290 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 May 13:44:36.290 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 15 13:44:36.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7890' May 15 13:44:36.768: INFO: stderr: "" May 15 13:44:36.768: INFO: stdout: "service/rm2 exposed\n" May 15 13:44:36.778: INFO: Service rm2 in namespace kubectl-7890 found. STEP: exposing service May 15 13:44:38.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7890' May 15 13:44:38.945: INFO: stderr: "" May 15 13:44:38.945: INFO: stdout: "service/rm3 exposed\n" May 15 13:44:38.959: INFO: Service rm3 in namespace kubectl-7890 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:44:41.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7890" for this suite. 
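[Annotation] kubectl expose is shorthand for creating a Service that selects the controller's pods. The rm2 invocation above is roughly equivalent to this manifest (the app: redis selector is taken from the selector the log itself reports):

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7890
spec:
  selector:
    app: redis          # matches the RC's pod labels, per the log's "map[app:redis]"
  ports:
  - port: 1234          # --port
    targetPort: 6379    # --target-port

The second command then exposes the service rm2 itself as rm3 on port 2345, still forwarding to targetPort 6379 on the same pods.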
May 15 13:45:03.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:45:03.112: INFO: namespace kubectl-7890 deletion completed in 22.107748134s • [SLOW TEST:32.962 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:45:03.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 15 13:45:03.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 15 13:45:03.499: INFO: stderr: "" May 15 13:45:03.499: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:45:03.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7384" for this suite. 
May 15 13:45:09.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:45:09.666: INFO: namespace kubectl-7384 deletion completed in 6.162539374s • [SLOW TEST:6.552 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:45:09.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 13:45:17.840: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 13:45:17.850: INFO: Pod pod-with-prestop-http-hook still exists May 15 13:45:19.850: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 13:45:19.854: INFO: Pod pod-with-prestop-http-hook still exists May 15 13:45:21.850: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 13:45:21.854: INFO: Pod pod-with-prestop-http-hook still exists May 15 13:45:23.850: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 13:45:23.854: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:45:23.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9947" for this suite. 
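[Annotation] The lifecycle spec deletes a pod carrying a preStop httpGet hook and then confirms the hook fired before the container was killed. A sketch of the hook side; in the actual spec the hook targets a separate handler pod, reduced here to illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # present in the node image lists above
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumption: whatever endpoint records that the hook ran
          port: 8080
          host: 10.244.1.10         # assumption: the handler pod's IP; defaults to the pod's own IP if omitted

The kubelet issues this GET when deletion starts and only then sends the container its termination signal, which is why the log polls for the pod to disappear before checking the hook.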
May 15 13:45:45.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:45:45.960: INFO: namespace container-lifecycle-hook-9947 deletion completed in 22.096184686s • [SLOW TEST:36.294 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:45:45.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e709cdef-e378-4e89-a895-6498f2ac92c6 STEP: Creating a pod to test consume configMaps May 15 13:45:46.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4" in namespace "configmap-2366" to be "success or failure" May 15 13:45:46.251: INFO: Pod "pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.608478ms May 15 13:45:48.254: INFO: Pod "pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019551191s May 15 13:45:50.257: INFO: Pod "pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023447028s STEP: Saw pod success May 15 13:45:50.257: INFO: Pod "pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4" satisfied condition "success or failure" May 15 13:45:50.260: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4 container configmap-volume-test: STEP: delete the pod May 15 13:45:50.283: INFO: Waiting for pod pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4 to disappear May 15 13:45:50.306: INFO: Pod pod-configmaps-42e2d624-1e68-4193-8dfa-8b70a30996f4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:45:50.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2366" for this suite. 
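Note: in the ConfigMap volume test above, "mappings and Item mode set" means a key is remapped to a custom path inside the volume and given an explicit per-item file mode. A minimal sketch with hypothetical names, using 0400 as the item mode:

kubectl create configmap test-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-item-mode
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox              # assumed image
    command: ["sh", "-c", "ls -lR /etc/cm && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: test-cm
      items:
      - key: data-1
        path: path/to/data-1    # the "mapping": key renamed to a nested path
        mode: 0400              # the per-item mode
EOF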
May 15 13:45:56.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:45:56.477: INFO: namespace configmap-2366 deletion completed in 6.166641216s • [SLOW TEST:10.516 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:45:56.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 15 13:45:56.536: INFO: Waiting up to 5m0s for pod "client-containers-c2952381-b369-445c-8382-65854cb1fb5c" in namespace "containers-3721" to be "success or failure" May 15 13:45:56.572: INFO: Pod "client-containers-c2952381-b369-445c-8382-65854cb1fb5c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.068622ms May 15 13:45:58.770: INFO: Pod "client-containers-c2952381-b369-445c-8382-65854cb1fb5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23469452s May 15 13:46:00.774: INFO: Pod "client-containers-c2952381-b369-445c-8382-65854cb1fb5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.238675762s STEP: Saw pod success May 15 13:46:00.774: INFO: Pod "client-containers-c2952381-b369-445c-8382-65854cb1fb5c" satisfied condition "success or failure" May 15 13:46:00.778: INFO: Trying to get logs from node iruya-worker pod client-containers-c2952381-b369-445c-8382-65854cb1fb5c container test-container: STEP: delete the pod May 15 13:46:00.859: INFO: Waiting for pod client-containers-c2952381-b369-445c-8382-65854cb1fb5c to disappear May 15 13:46:00.899: INFO: Pod client-containers-c2952381-b369-445c-8382-65854cb1fb5c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:46:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3721" for this suite. 
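Note: the Docker Containers test above verifies that when a pod spec sets neither command nor args, the container runs whatever ENTRYPOINT/CMD the image itself defines. A sketch of that shape; the image is borrowed from elsewhere in this log and any image with a self-contained entrypoint would do:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults
spec:
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # image seen earlier in this log
    # no command:, no args: -> the image's baked-in entrypoint runs
EOF
kubectl get pod image-defaults   # the container starts purely from image defaults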
May 15 13:46:07.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:46:07.077: INFO: namespace containers-3721 deletion completed in 6.174180295s • [SLOW TEST:10.600 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:46:07.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8d460b05-9fb7-4841-9718-5f784c1ad759 STEP: Creating a pod to test consume secrets May 15 13:46:07.165: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03" in namespace "projected-7029" to be "success or failure" May 15 13:46:07.225: INFO: Pod "pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03": Phase="Pending", Reason="", readiness=false. Elapsed: 60.094848ms May 15 13:46:09.228: INFO: Pod "pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063352241s May 15 13:46:11.232: INFO: Pod "pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06727759s STEP: Saw pod success May 15 13:46:11.232: INFO: Pod "pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03" satisfied condition "success or failure" May 15 13:46:11.235: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03 container projected-secret-volume-test: STEP: delete the pod May 15 13:46:11.340: INFO: Waiting for pod pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03 to disappear May 15 13:46:12.133: INFO: Pod pod-projected-secrets-5feb3558-bdea-4fe8-8439-9894c4507d03 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:46:12.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7029" for this suite. 
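Note: the projected-secret test above mounts a Secret through a projected volume and checks that the files receive the volume's defaultMode. A minimal sketch; the names and the mode 0400 are illustrative:

kubectl create secret generic proj-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox              # assumed image
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: proj
    projected:
      defaultMode: 0400         # applied to every projected file
      sources:
      - secret:
          name: proj-secret
EOF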
May 15 13:46:18.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:46:18.292: INFO: namespace projected-7029 deletion completed in 6.145919556s • [SLOW TEST:11.213 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:46:18.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 15 13:46:18.350: INFO: Waiting up to 5m0s for pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9" in namespace "downward-api-312" to be "success or failure" May 15 13:46:18.354: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.977725ms May 15 13:46:20.381: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031419821s May 15 13:46:22.724: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373610612s May 15 13:46:24.730: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379879202s May 15 13:46:26.746: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.395564634s STEP: Saw pod success May 15 13:46:26.746: INFO: Pod "downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9" satisfied condition "success or failure" May 15 13:46:26.747: INFO: Trying to get logs from node iruya-worker pod downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9 container dapi-container: STEP: delete the pod May 15 13:46:27.698: INFO: Waiting for pod downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9 to disappear May 15 13:46:27.929: INFO: Pod downward-api-b36c2f03-061e-49e3-bf02-c982f437f1f9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:46:27.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-312" for this suite. 
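Note: the Downward API test above exposes the container's own resource requests and limits as environment variables via resourceFieldRef. A sketch with hypothetical names and resource values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumed image
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.cpu } }
    - name: CPU_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.cpu } }
    - name: MEMORY_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.memory } }
    - name: MEMORY_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.memory } }
EOF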
May 15 13:46:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:46:36.133: INFO: namespace downward-api-312 deletion completed in 8.200735354s • [SLOW TEST:17.841 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:46:36.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 15 13:46:40.313: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d22e4548-33fd-4e96-91d1-3190fb9d4310,GenerateName:,Namespace:events-6505,SelfLink:/api/v1/namespaces/events-6505/pods/send-events-d22e4548-33fd-4e96-91d1-3190fb9d4310,UID:56baf05f-21d7-4c61-b6ff-0a5d40b515d9,ResourceVersion:11042872,Generation:0,CreationTimestamp:2020-05-15 13:46:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 212533141,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nzqkp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nzqkp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nzqkp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033104a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003310630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:46:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:46:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:46:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:46:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.14,StartTime:2020-05-15 13:46:36 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-15 13:46:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cf9e62fd8f5f80939eec837d2c21756a67b546602ad58bb78e00899440ccb9c0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 15 13:46:42.317: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 15 13:46:44.321: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:46:44.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6505" for this suite. May 15 13:47:22.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:47:22.482: INFO: namespace events-6505 deletion completed in 38.110783547s • [SLOW TEST:46.349 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:47:22.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 15 13:47:22.565: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11042971,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 13:47:22.565: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11042971,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 15 13:47:32.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11042992,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 15 13:47:32.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11042992,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 15 13:47:42.582: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11043013,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 
13:47:42.582: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11043013,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 15 13:47:52.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11043033,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 13:47:52.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-a,UID:778f8e7d-f91a-4a81-b5e6-a92594da24c0,ResourceVersion:11043033,Generation:0,CreationTimestamp:2020-05-15 13:47:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 15 13:48:02.596: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-b,UID:7af7b049-b2bf-41bd-9059-3204d39f44e1,ResourceVersion:11043054,Generation:0,CreationTimestamp:2020-05-15 13:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 13:48:02.596: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-b,UID:7af7b049-b2bf-41bd-9059-3204d39f44e1,ResourceVersion:11043054,Generation:0,CreationTimestamp:2020-05-15 13:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 15 13:48:12.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-b,UID:7af7b049-b2bf-41bd-9059-3204d39f44e1,ResourceVersion:11043074,Generation:0,CreationTimestamp:2020-05-15 13:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 13:48:12.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2721,SelfLink:/api/v1/namespaces/watch-2721/configmaps/e2e-watch-test-configmap-b,UID:7af7b049-b2bf-41bd-9059-3204d39f44e1,ResourceVersion:11043074,Generation:0,CreationTimestamp:2020-05-15 13:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:48:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2721" for this suite. 
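Note: the Watchers test above registers three watches (label A, label B, and A-or-B) and asserts each sees exactly the ADDED/MODIFIED/DELETED notifications for ConfigMaps matching its selector; each event is logged twice because two of the three watchers match it. The same lifecycle can be observed from the CLI with a label-selected watch (names and labels mirror the log; the patch payload is illustrative):

kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch &   # streams changes to matching ConfigMaps
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a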
May 15 13:48:28.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:48:28.705: INFO: namespace watch-2721 deletion completed in 6.090877017s • [SLOW TEST:66.223 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:48:28.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 15 13:48:28.864: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 15 13:48:28.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:29.160: INFO: stderr: "" May 15 13:48:29.160: INFO: stdout: "service/redis-slave created\n" May 15 13:48:29.160: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 15 13:48:29.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:29.493: INFO: stderr: "" May 15 13:48:29.493: INFO: stdout: "service/redis-master created\n" May 15 13:48:29.493: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 15 13:48:29.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:29.777: INFO: stderr: "" May 15 13:48:29.777: INFO: stdout: "service/frontend created\n" May 15 13:48:29.777: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 15 13:48:29.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:30.031: INFO: stderr: "" May 15 13:48:30.031: INFO: stdout: "deployment.apps/frontend created\n" May 15 13:48:30.031: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 15 13:48:30.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:30.344: INFO: stderr: "" May 15 13:48:30.344: INFO: stdout: "deployment.apps/redis-master created\n" May 15 13:48:30.344: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 15 13:48:30.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2171' May 15 13:48:30.594: INFO: stderr: "" May 15 13:48:30.594: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 15 13:48:30.594: INFO: Waiting for all frontend pods to be Running. May 15 13:48:40.644: INFO: Waiting for frontend to serve content. May 15 13:48:40.662: INFO: Trying to add a new entry to the guestbook. May 15 13:48:40.719: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 15 13:48:40.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:40.877: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:40.877: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 15 13:48:40.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:41.029: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:41.029: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 15 13:48:41.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:41.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:41.165: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 13:48:41.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:41.260: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:41.260: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 13:48:41.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:41.350: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:41.350: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 15 13:48:41.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2171' May 15 13:48:41.496: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 13:48:41.496: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:48:41.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2171" for this suite. 
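Note: the guestbook test above pipes six manifests through kubectl create, waits for the frontend pods to run, exercises the app end to end, and then force-deletes everything. A hypothetical stand-in for the "validating guestbook app" step, using the classic guestbook sample's PHP endpoint (path and query parameters are assumptions, not taken from this log):

kubectl -n kubectl-2171 rollout status deployment/frontend --timeout=120s
kubectl -n kubectl-2171 run probe --image=busybox --restart=Never --rm -i -- \
  wget -qO- 'http://frontend/guestbook.php?cmd=get&key=messages'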
May 15 13:49:21.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:49:21.664: INFO: namespace kubectl-2171 deletion completed in 40.154679966s • [SLOW TEST:52.959 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:49:21.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 15 13:49:21.732: INFO: Waiting up to 5m0s for pod "var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2" in namespace "var-expansion-7121" to be "success or failure" May 15 13:49:21.736: INFO: Pod "var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18851ms May 15 13:49:23.982: INFO: Pod "var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249738529s May 15 13:49:26.030: INFO: Pod "var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297604626s STEP: Saw pod success May 15 13:49:26.030: INFO: Pod "var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2" satisfied condition "success or failure" May 15 13:49:26.033: INFO: Trying to get logs from node iruya-worker pod var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2 container dapi-container: STEP: delete the pod May 15 13:49:26.067: INFO: Waiting for pod var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2 to disappear May 15 13:49:26.107: INFO: Pod var-expansion-218acfd8-9703-4c37-b46b-5e7ea76170a2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:49:26.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7121" for this suite. 
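Note: the Variable Expansion test above checks that an env entry may be composed from earlier entries using $(NAME) references, which are expanded before the container starts. A minimal sketch with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumed image
    command: ["sh", "-c", "echo $SERVICE_ADDRESS"]
    env:
    - name: SERVICE_HOST
      value: "10.0.0.1"
    - name: SERVICE_PORT
      value: "8080"
    - name: SERVICE_ADDRESS
      value: "$(SERVICE_HOST):$(SERVICE_PORT)"   # expands to 10.0.0.1:8080
EOF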
May 15 13:49:32.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:49:32.324: INFO: namespace var-expansion-7121 deletion completed in 6.213356184s • [SLOW TEST:10.659 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:49:32.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 13:49:32.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b" in namespace "downward-api-4960" to be "success or failure" May 15 13:49:32.401: INFO: Pod "downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887167ms May 15 13:49:34.406: INFO: Pod "downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019620218s May 15 13:49:36.409: INFO: Pod "downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023208895s STEP: Saw pod success May 15 13:49:36.410: INFO: Pod "downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b" satisfied condition "success or failure" May 15 13:49:36.412: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b container client-container: STEP: delete the pod May 15 13:49:36.439: INFO: Waiting for pod downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b to disappear May 15 13:49:36.456: INFO: Pod downwardapi-volume-33c5a3d0-72b1-4aa5-917a-d03bbba4c26b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:49:36.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4960" for this suite. 
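Note: the downward API volume test above projects metadata.name into a file named "podname" and reads it back from the container. A minimal sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-volume-test   # prints the pod's own name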
May 15 13:49:42.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:49:42.583: INFO: namespace downward-api-4960 deletion completed in 6.12406733s • [SLOW TEST:10.258 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:49:42.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-79a19622-79b7-409b-bc31-c205c85f3730 in namespace container-probe-3925 May 15 13:49:46.706: INFO: Started pod busybox-79a19622-79b7-409b-bc31-c205c85f3730 in namespace container-probe-3925 STEP: checking the pod's current state and verifying that restartCount is present May 15 13:49:46.710: INFO: Initial restart count of pod busybox-79a19622-79b7-409b-bc31-c205c85f3730 is 0 May 15 13:50:34.831: INFO: Restart count of pod container-probe-3925/busybox-79a19622-79b7-409b-bc31-c205c85f3730 is now 1 (48.121281393s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:50:34.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3925" for this suite. 
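Note: in the probe test above, the container creates /tmp/health, sleeps, removes it, and keeps running; once the file is gone the exec probe fails and the kubelet restarts the container, which is why restartCount moves from 0 to 1 after roughly 48s. A sketch of that shape (timings and thresholds illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox              # assumed image
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# After the file disappears, expect this to reach 1:
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'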
May 15 13:50:40.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:50:41.027: INFO: namespace container-probe-3925 deletion completed in 6.112516239s • [SLOW TEST:58.443 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:50:41.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9380 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 13:50:41.094: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 13:51:09.416: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.19 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9380 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 13:51:09.417: INFO: >>> kubeConfig: /root/.kube/config I0515 13:51:09.446056 6 log.go:172] (0xc003182b00) (0xc003187ea0) Create stream I0515 13:51:09.446087 6 log.go:172] (0xc003182b00) (0xc003187ea0) Stream added, broadcasting: 1 I0515 13:51:09.447951 6 log.go:172] (0xc003182b00) Reply frame received for 1 I0515 13:51:09.447983 6 log.go:172] (0xc003182b00) (0xc0011bad20) Create stream I0515 13:51:09.447993 6 log.go:172] (0xc003182b00) (0xc0011bad20) Stream added, broadcasting: 3 I0515 13:51:09.448948 6 log.go:172] (0xc003182b00) Reply frame received for 3 I0515 13:51:09.448984 6 log.go:172] (0xc003182b00) (0xc0011bae60) Create stream I0515 13:51:09.448995 6 log.go:172] (0xc003182b00) (0xc0011bae60) Stream added, broadcasting: 5 I0515 13:51:09.450160 6 log.go:172] (0xc003182b00) Reply frame received for 5 I0515 13:51:10.528314 6 log.go:172] (0xc003182b00) Data frame received for 3 I0515 13:51:10.528439 6 log.go:172] (0xc0011bad20) (3) Data frame handling I0515 13:51:10.528516 6 log.go:172] (0xc0011bad20) (3) Data frame sent I0515 13:51:10.529624 6 log.go:172] (0xc003182b00) Data frame received for 3 I0515 13:51:10.529690 6 log.go:172] (0xc003182b00) Data frame received for 5 I0515 13:51:10.529776 6 log.go:172] (0xc0011bae60) (5) Data frame handling I0515 13:51:10.529831 6 log.go:172] (0xc0011bad20) (3) Data frame handling I0515 13:51:10.531733 6 log.go:172] (0xc003182b00) Data frame received for 1 I0515 13:51:10.531766 6 log.go:172] (0xc003187ea0) (1) Data frame handling I0515 13:51:10.531791 6 log.go:172] (0xc003187ea0) (1) Data frame 
sent I0515 13:51:10.531817 6 log.go:172] (0xc003182b00) (0xc003187ea0) Stream removed, broadcasting: 1 I0515 13:51:10.531969 6 log.go:172] (0xc003182b00) (0xc003187ea0) Stream removed, broadcasting: 1 I0515 13:51:10.532002 6 log.go:172] (0xc003182b00) (0xc0011bad20) Stream removed, broadcasting: 3 I0515 13:51:10.532022 6 log.go:172] (0xc003182b00) (0xc0011bae60) Stream removed, broadcasting: 5 I0515 13:51:10.532072 6 log.go:172] (0xc003182b00) Go away received May 15 13:51:10.532: INFO: Found all expected endpoints: [netserver-0] May 15 13:51:10.535: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.109 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9380 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 13:51:10.535: INFO: >>> kubeConfig: /root/.kube/config I0515 13:51:10.568849 6 log.go:172] (0xc0013fe630) (0xc000238320) Create stream I0515 13:51:10.568877 6 log.go:172] (0xc0013fe630) (0xc000238320) Stream added, broadcasting: 1 I0515 13:51:10.571028 6 log.go:172] (0xc0013fe630) Reply frame received for 1 I0515 13:51:10.571097 6 log.go:172] (0xc0013fe630) (0xc003187f40) Create stream I0515 13:51:10.571115 6 log.go:172] (0xc0013fe630) (0xc003187f40) Stream added, broadcasting: 3 I0515 13:51:10.572263 6 log.go:172] (0xc0013fe630) Reply frame received for 3 I0515 13:51:10.572288 6 log.go:172] (0xc0013fe630) (0xc0011bb680) Create stream I0515 13:51:10.572311 6 log.go:172] (0xc0013fe630) (0xc0011bb680) Stream added, broadcasting: 5 I0515 13:51:10.573600 6 log.go:172] (0xc0013fe630) Reply frame received for 5 I0515 13:51:11.658049 6 log.go:172] (0xc0013fe630) Data frame received for 5 I0515 13:51:11.658111 6 log.go:172] (0xc0011bb680) (5) Data frame handling I0515 13:51:11.658142 6 log.go:172] (0xc0013fe630) Data frame received for 3 I0515 13:51:11.658154 6 log.go:172] (0xc003187f40) (3) Data frame handling I0515 13:51:11.658170 6 log.go:172] (0xc003187f40) (3) Data frame sent I0515 13:51:11.659201 6 log.go:172] (0xc0013fe630) Data frame received for 3 I0515 13:51:11.659230 6 log.go:172] (0xc003187f40) (3) Data frame handling I0515 13:51:11.660140 6 log.go:172] (0xc0013fe630) Data frame received for 1 I0515 13:51:11.660165 6 log.go:172] (0xc000238320) (1) Data frame handling I0515 13:51:11.660185 6 log.go:172] (0xc000238320) (1) Data frame sent I0515 13:51:11.660217 6 log.go:172] (0xc0013fe630) (0xc000238320) Stream removed, broadcasting: 1 I0515 13:51:11.660256 6 log.go:172] (0xc0013fe630) Go away received I0515 13:51:11.660365 6 log.go:172] (0xc0013fe630) (0xc000238320) Stream removed, broadcasting: 1 I0515 13:51:11.660390 6 log.go:172] (0xc0013fe630) (0xc003187f40) Stream removed, broadcasting: 3 I0515 13:51:11.660404 6 log.go:172] (0xc0013fe630) (0xc0011bb680) Stream removed, broadcasting: 5 May 15 13:51:11.660: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:51:11.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9380" for this suite. 
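Note: the UDP check above boils down to one exec: from the host-network helper pod, send a line to the netserver pod's UDP port and expect the target's hostname back (the grep strips blank lines). The command below mirrors the log, with POD_IP standing in for the target pod IP the test reads from pod status:

kubectl -n pod-network-test-9380 exec host-test-container-pod -- \
  /bin/sh -c "echo hostName | nc -w 1 -u POD_IP 8081 | grep -v '^\s*$'"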
May 15 13:51:35.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 13:51:35.872: INFO: namespace pod-network-test-9380 deletion completed in 24.143180563s • [SLOW TEST:54.845 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 13:51:35.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2858 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 13:51:35.911: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 13:52:02.050: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.21:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2858 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 13:52:02.050: INFO: >>> kubeConfig: /root/.kube/config I0515 13:52:02.078338 6 log.go:172] (0xc003182580) (0xc0002395e0) Create stream I0515 13:52:02.078365 6 log.go:172] (0xc003182580) (0xc0002395e0) Stream added, broadcasting: 1 I0515 13:52:02.080005 6 log.go:172] (0xc003182580) Reply frame received for 1 I0515 13:52:02.080054 6 log.go:172] (0xc003182580) (0xc001c245a0) Create stream I0515 13:52:02.080075 6 log.go:172] (0xc003182580) (0xc001c245a0) Stream added, broadcasting: 3 I0515 13:52:02.080900 6 log.go:172] (0xc003182580) Reply frame received for 3 I0515 13:52:02.080933 6 log.go:172] (0xc003182580) (0xc000239860) Create stream I0515 13:52:02.080943 6 log.go:172] (0xc003182580) (0xc000239860) Stream added, broadcasting: 5 I0515 13:52:02.082296 6 log.go:172] (0xc003182580) Reply frame received for 5 I0515 13:52:02.154795 6 log.go:172] (0xc003182580) Data frame received for 3 I0515 13:52:02.154830 6 log.go:172] (0xc001c245a0) (3) Data frame handling I0515 13:52:02.154856 6 log.go:172] (0xc001c245a0) (3) Data frame sent I0515 13:52:02.158273 6 log.go:172] (0xc003182580) Data frame received for 3 I0515 13:52:02.158301 6 log.go:172] (0xc001c245a0) (3) Data frame handling I0515 13:52:02.158410 6 log.go:172] (0xc003182580) Data frame received for 5 I0515 13:52:02.158427 6 log.go:172] (0xc000239860) (5) Data frame handling I0515 13:52:02.160354 6 log.go:172] 
(0xc003182580) Data frame received for 1 I0515 13:52:02.160373 6 log.go:172] (0xc0002395e0) (1) Data frame handling I0515 13:52:02.160383 6 log.go:172] (0xc0002395e0) (1) Data frame sent I0515 13:52:02.160393 6 log.go:172] (0xc003182580) (0xc0002395e0) Stream removed, broadcasting: 1 I0515 13:52:02.160410 6 log.go:172] (0xc003182580) Go away received I0515 13:52:02.160477 6 log.go:172] (0xc003182580) (0xc0002395e0) Stream removed, broadcasting: 1 I0515 13:52:02.160495 6 log.go:172] (0xc003182580) (0xc001c245a0) Stream removed, broadcasting: 3 I0515 13:52:02.160502 6 log.go:172] (0xc003182580) (0xc000239860) Stream removed, broadcasting: 5 May 15 13:52:02.160: INFO: Found all expected endpoints: [netserver-0] May 15 13:52:02.163: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.110:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2858 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 13:52:02.163: INFO: >>> kubeConfig: /root/.kube/config I0515 13:52:02.185091 6 log.go:172] (0xc000fd66e0) (0xc0019eb2c0) Create stream I0515 13:52:02.185258 6 log.go:172] (0xc000fd66e0) (0xc0019eb2c0) Stream added, broadcasting: 1 I0515 13:52:02.186907 6 log.go:172] (0xc000fd66e0) Reply frame received for 1 I0515 13:52:02.186965 6 log.go:172] (0xc000fd66e0) (0xc0019eb400) Create stream I0515 13:52:02.186984 6 log.go:172] (0xc000fd66e0) (0xc0019eb400) Stream added, broadcasting: 3 I0515 13:52:02.187931 6 log.go:172] (0xc000fd66e0) Reply frame received for 3 I0515 13:52:02.187966 6 log.go:172] (0xc000fd66e0) (0xc001c24640) Create stream I0515 13:52:02.187977 6 log.go:172] (0xc000fd66e0) (0xc001c24640) Stream added, broadcasting: 5 I0515 13:52:02.188788 6 log.go:172] (0xc000fd66e0) Reply frame received for 5 I0515 13:52:02.262908 6 log.go:172] (0xc000fd66e0) Data frame received for 3 I0515 13:52:02.262940 6 log.go:172] (0xc0019eb400) (3) Data frame handling I0515 13:52:02.262958 6 log.go:172] (0xc0019eb400) (3) Data frame sent I0515 13:52:02.263179 6 log.go:172] (0xc000fd66e0) Data frame received for 3 I0515 13:52:02.263225 6 log.go:172] (0xc0019eb400) (3) Data frame handling I0515 13:52:02.263270 6 log.go:172] (0xc000fd66e0) Data frame received for 5 I0515 13:52:02.263308 6 log.go:172] (0xc001c24640) (5) Data frame handling I0515 13:52:02.264546 6 log.go:172] (0xc000fd66e0) Data frame received for 1 I0515 13:52:02.264568 6 log.go:172] (0xc0019eb2c0) (1) Data frame handling I0515 13:52:02.264583 6 log.go:172] (0xc0019eb2c0) (1) Data frame sent I0515 13:52:02.264671 6 log.go:172] (0xc000fd66e0) (0xc0019eb2c0) Stream removed, broadcasting: 1 I0515 13:52:02.264709 6 log.go:172] (0xc000fd66e0) Go away received I0515 13:52:02.264786 6 log.go:172] (0xc000fd66e0) (0xc0019eb2c0) Stream removed, broadcasting: 1 I0515 13:52:02.264807 6 log.go:172] (0xc000fd66e0) (0xc0019eb400) Stream removed, broadcasting: 3 I0515 13:52:02.264814 6 log.go:172] (0xc000fd66e0) (0xc001c24640) Stream removed, broadcasting: 5 May 15 13:52:02.264: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 13:52:02.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2858" for this suite. 
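Note: the HTTP variant runs the same way, fetching /hostName from the netserver's HTTP port instead of probing UDP; again the command mirrors the log and POD_IP is a placeholder for the discovered pod address:

kubectl -n pod-network-test-2858 exec host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://POD_IP:8080/hostName"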
May 15 13:52:26.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:52:26.378: INFO: namespace pod-network-test-2858 deletion completed in 24.109290706s
• [SLOW TEST:50.505 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:52:26.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6039cb1f-a953-44fd-a9dd-fc6d6fd9f6da
STEP: Creating a pod to test consume configMaps
May 15 13:52:26.489: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6" in namespace "projected-8073" to be "success or failure"
May 15 13:52:26.492: INFO: Pod "pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228751ms
May 15 13:52:29.144: INFO: Pod "pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.654997975s
May 15 13:52:31.147: INFO: Pod "pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.658723671s
STEP: Saw pod success
May 15 13:52:31.147: INFO: Pod "pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6" satisfied condition "success or failure"
May 15 13:52:31.150: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 15 13:52:31.227: INFO: Waiting for pod pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6 to disappear
May 15 13:52:31.252: INFO: Pod pod-projected-configmaps-3071eea8-0aa6-4a23-a11c-183cabd8ddc6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:52:31.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8073" for this suite.
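For reference, the pod this test creates consumes the ConfigMap through a projected volume. A minimal sketch of such a pod object using the Kubernetes API types is below; the object names, key path, image, and command are illustrative placeholders, not the exact values the framework generates.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildProjectedConfigMapPod sketches a pod that mounts a ConfigMap via a
// projected volume, in the shape of the pod exercised by the test above.
func buildProjectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-example", // placeholder name
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox", // placeholder; the e2e test uses a dedicated mounttest image
				// Print the projected file so Phase=Succeeded implies the data was visible.
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}

func main() { _ = buildProjectedConfigMapPod() }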
May 15 13:52:37.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:52:37.344: INFO: namespace projected-8073 deletion completed in 6.088938651s
• [SLOW TEST:10.965 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:52:37.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-68124406-f54a-49be-86a6-dc66c7f376da in namespace container-probe-5463
May 15 13:52:41.452: INFO: Started pod busybox-68124406-f54a-49be-86a6-dc66c7f376da in namespace container-probe-5463
STEP: checking the pod's current state and verifying that restartCount is present
May 15 13:52:41.455: INFO: Initial restart count of pod busybox-68124406-f54a-49be-86a6-dc66c7f376da is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:56:42.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5463" for this suite.
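The pod here passes because its container keeps /tmp/health in place, so the exec probe always exits 0 and the kubelet never restarts it. A rough Go sketch of a pod in that shape is below; the image, command, and probe timings are illustrative assumptions, and note that in client-go v1.24+ the probe's embedded handler field is named ProbeHandler (older releases, including the v1.15 vintage of this log, call it Handler).

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessPodSketch models the pod above: the file the probe cats always
// exists, so restartCount stays at 0 for the whole observation window.
func livenessPodSketch() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Field named Handler instead of ProbeHandler before v1.24.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15, // illustrative timings
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() { _ = livenessPodSketch() }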
May 15 13:56:48.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:56:48.506: INFO: namespace container-probe-5463 deletion completed in 6.115909525s
• [SLOW TEST:251.162 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:56:48.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0515 13:56:58.594785 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 15 13:56:58.594: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:56:58.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6358" for this suite.
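"Not orphaning" in this spec corresponds to deleting the replication controller with a Background (or Foreground) propagation policy, which lets the garbage collector remove the RC's pods through their ownerReferences. A minimal client-go sketch of such a delete is below, assuming a recent client-go where API calls take a context; the kubeconfig path, namespace, and RC name are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation = "not orphaning": the GC deletes dependent
	// pods once the RC is gone. DeletePropagationOrphan would instead strip
	// the ownerReferences and leave the pods running.
	policy := metav1.DeletePropagationBackground
	err = clientset.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", // placeholder namespace and name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("rc deleted; its pods will be garbage collected")
}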
May 15 13:57:04.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:57:04.698: INFO: namespace gc-6358 deletion completed in 6.101554532s
• [SLOW TEST:16.193 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:57:04.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-1585c551-470b-498c-acad-158936900249
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:57:04.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-948" for this suite.
May 15 13:57:10.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:57:10.965: INFO: namespace configmap-948 deletion completed in 6.181236571s
• [SLOW TEST:6.266 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:57:10.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 13:57:11.069: INFO: Creating deployment "test-recreate-deployment"
May 15 13:57:11.119: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 15 13:57:11.199: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 15 13:57:13.206: INFO: Waiting deployment "test-recreate-deployment" to complete
May 15 13:57:13.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725147831, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725147831, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725147831, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725147831, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 15 13:57:15.211: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 15 13:57:15.217: INFO: Updating deployment test-recreate-deployment
May 15 13:57:15.217: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 15 13:57:15.887: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1050,SelfLink:/apis/apps/v1/namespaces/deployment-1050/deployments/test-recreate-deployment,UID:5fcd6402-e6ab-4854-997c-9f34c6fc75e9,ResourceVersion:11044688,Generation:2,CreationTimestamp:2020-05-15 13:57:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-15 13:57:15 +0000 UTC 2020-05-15 13:57:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-15 13:57:15 +0000 UTC 2020-05-15 13:57:11 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
May 15 13:57:15.927: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1050,SelfLink:/apis/apps/v1/namespaces/deployment-1050/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d9ab629f-935b-4589-83e1-a972c2bbdb6c,ResourceVersion:11044686,Generation:1,CreationTimestamp:2020-05-15 13:57:15 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5fcd6402-e6ab-4854-997c-9f34c6fc75e9 0xc0026d7267 0xc0026d7268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 15 13:57:15.927: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May 15 13:57:15.927: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1050,SelfLink:/apis/apps/v1/namespaces/deployment-1050/replicasets/test-recreate-deployment-6df85df6b9,UID:8dfa2bdd-4b96-48f2-9b2e-b7b3b33b3a38,ResourceVersion:11044677,Generation:2,CreationTimestamp:2020-05-15 13:57:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5fcd6402-e6ab-4854-997c-9f34c6fc75e9 0xc0026d7337 0xc0026d7338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 15 13:57:15.934: INFO: Pod "test-recreate-deployment-5c8c9cc69d-6d8bb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-6d8bb,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1050,SelfLink:/api/v1/namespaces/deployment-1050/pods/test-recreate-deployment-5c8c9cc69d-6d8bb,UID:0e2cdc1d-e03e-40f8-8071-b1dcd4cdd062,ResourceVersion:11044691,Generation:0,CreationTimestamp:2020-05-15 13:57:15 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d9ab629f-935b-4589-83e1-a972c2bbdb6c 0xc0026d7c17 0xc0026d7c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fhlff {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fhlff,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fhlff true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026d7c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026d7cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:57:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:57:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:57:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 13:57:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 13:57:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:57:15.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1050" for this suite.
May 15 13:57:22.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:57:22.127: INFO: namespace deployment-1050 deletion completed in 6.189034899s
• [SLOW TEST:11.162 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:57:22.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
May 15 13:57:22.291: INFO: Waiting up to 5m0s for pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6" in namespace "containers-4032" to be "success or failure"
May 15 13:57:22.313: INFO: Pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.341181ms
May 15 13:57:24.316: INFO: Pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024698295s
May 15 13:57:26.320: INFO: Pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.029077201s
May 15 13:57:28.325: INFO: Pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033636679s
STEP: Saw pod success
May 15 13:57:28.325: INFO: Pod "client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6" satisfied condition "success or failure"
May 15 13:57:28.328: INFO: Trying to get logs from node iruya-worker2 pod client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6 container test-container: <nil>
STEP: delete the pod
May 15 13:57:28.349: INFO: Waiting for pod client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6 to disappear
May 15 13:57:28.353: INFO: Pod client-containers-938d2adb-69a4-4f59-af30-d6eb82d5a4d6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:57:28.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4032" for this suite.
May 15 13:57:34.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:57:34.471: INFO: namespace containers-4032 deletion completed in 6.115046897s
• [SLOW TEST:12.344 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:57:34.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 13:57:34.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3071" for this suite.
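The "secure master service" spec only inspects the API server's own Service object: the "kubernetes" Service in the "default" namespace, whose secure port is expected to be named "https" and served on 443. A rough client-go sketch of the same lookup, assuming a recent client-go where calls take a context (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The "kubernetes" Service in "default" fronts the API server itself.
	svc, err := clientset.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		// Expect a port named "https" listening on 443.
		fmt.Printf("port %q: %d -> targetPort %s\n", p.Name, p.Port, p.TargetPort.String())
	}
}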
May 15 13:57:40.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 13:57:40.716: INFO: namespace services-3071 deletion completed in 6.126911134s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.245 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 13:57:40.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a in namespace container-probe-7662
May 15 13:57:44.836: INFO: Started pod liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a in namespace container-probe-7662
STEP: checking the pod's current state and verifying that restartCount is present
May 15 13:57:44.839: INFO: Initial restart count of pod liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is 0
May 15 13:58:02.875: INFO: Restart count of pod container-probe-7662/liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is now 1 (18.036626802s elapsed)
May 15 13:58:22.991: INFO: Restart count of pod container-probe-7662/liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is now 2 (38.152690253s elapsed)
May 15 13:58:43.032: INFO: Restart count of pod container-probe-7662/liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is now 3 (58.193411506s elapsed)
May 15 13:59:03.073: INFO: Restart count of pod container-probe-7662/liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is now 4 (1m18.234271194s elapsed)
May 15 14:00:05.280: INFO: Restart count of pod container-probe-7662/liveness-4e3e6266-1c21-480a-8f07-7b9beebd4f4a is now 5 (2m20.441192492s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:00:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7662" for this suite.
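The restart counter read above comes straight from the pod's container status. A small client-go sketch of an equivalent polling loop is below, with placeholder namespace, pod name, and kubeconfig path, assuming a recent client-go; the real test uses the framework's own polling helpers rather than a hand-rolled loop.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	last := int32(-1)
	for i := 0; i < 30; i++ { // bounded poll, roughly the cadence seen in the log
		pod, err := clientset.CoreV1().Pods("container-probe-7662").
			Get(context.TODO(), "liveness-example", metav1.GetOptions{}) // placeholder name
		if err != nil {
			panic(err)
		}
		if len(pod.Status.ContainerStatuses) > 0 {
			rc := pod.Status.ContainerStatuses[0].RestartCount
			if rc < last {
				panic("restart count decreased") // the monotonicity property under test
			}
			if rc != last {
				fmt.Printf("restart count is now %d\n", rc)
				last = rc
			}
		}
		time.Sleep(5 * time.Second)
	}
}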
May 15 14:00:11.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:00:11.378: INFO: namespace container-probe-7662 deletion completed in 6.081391202s • [SLOW TEST:150.661 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:00:11.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 15 14:00:21.526: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:21.527: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:21.564574 6 log.go:172] (0xc00165a210) (0xc0023e5a40) Create stream I0515 14:00:21.564618 6 log.go:172] (0xc00165a210) (0xc0023e5a40) Stream added, broadcasting: 1 I0515 14:00:21.567267 6 log.go:172] (0xc00165a210) Reply frame received for 1 I0515 14:00:21.567311 6 log.go:172] (0xc00165a210) (0xc002b9b4a0) Create stream I0515 14:00:21.567324 6 log.go:172] (0xc00165a210) (0xc002b9b4a0) Stream added, broadcasting: 3 I0515 14:00:21.568338 6 log.go:172] (0xc00165a210) Reply frame received for 3 I0515 14:00:21.568376 6 log.go:172] (0xc00165a210) (0xc00247c000) Create stream I0515 14:00:21.568388 6 log.go:172] (0xc00165a210) (0xc00247c000) Stream added, broadcasting: 5 I0515 14:00:21.569398 6 log.go:172] (0xc00165a210) Reply frame received for 5 I0515 14:00:21.658800 6 log.go:172] (0xc00165a210) Data frame received for 3 I0515 14:00:21.658831 6 log.go:172] (0xc002b9b4a0) (3) Data frame handling I0515 14:00:21.658853 6 log.go:172] (0xc00165a210) Data frame received for 5 I0515 14:00:21.658912 6 log.go:172] (0xc00247c000) (5) Data frame handling I0515 14:00:21.658949 6 log.go:172] (0xc002b9b4a0) (3) Data frame sent I0515 14:00:21.658969 6 log.go:172] (0xc00165a210) Data frame received for 3 I0515 14:00:21.658986 6 log.go:172] (0xc002b9b4a0) (3) Data frame handling I0515 14:00:21.660841 6 log.go:172] (0xc00165a210) Data frame received for 1 I0515 14:00:21.660864 6 log.go:172] (0xc0023e5a40) (1) Data frame handling I0515 14:00:21.660880 6 log.go:172] (0xc0023e5a40) (1) Data frame sent I0515 14:00:21.661020 6 log.go:172] (0xc00165a210) (0xc0023e5a40) Stream removed, broadcasting: 1 I0515 
14:00:21.661381 6 log.go:172] (0xc00165a210) (0xc0023e5a40) Stream removed, broadcasting: 1 I0515 14:00:21.661414 6 log.go:172] (0xc00165a210) (0xc002b9b4a0) Stream removed, broadcasting: 3 I0515 14:00:21.661437 6 log.go:172] (0xc00165a210) (0xc00247c000) Stream removed, broadcasting: 5 May 15 14:00:21.661: INFO: Exec stderr: "" May 15 14:00:21.661: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:21.661: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:21.665510 6 log.go:172] (0xc00165a210) Go away received I0515 14:00:21.694228 6 log.go:172] (0xc00165a790) (0xc0023e5e00) Create stream I0515 14:00:21.694262 6 log.go:172] (0xc00165a790) (0xc0023e5e00) Stream added, broadcasting: 1 I0515 14:00:21.696638 6 log.go:172] (0xc00165a790) Reply frame received for 1 I0515 14:00:21.696682 6 log.go:172] (0xc00165a790) (0xc001c25f40) Create stream I0515 14:00:21.696696 6 log.go:172] (0xc00165a790) (0xc001c25f40) Stream added, broadcasting: 3 I0515 14:00:21.697880 6 log.go:172] (0xc00165a790) Reply frame received for 3 I0515 14:00:21.697925 6 log.go:172] (0xc00165a790) (0xc0012b60a0) Create stream I0515 14:00:21.697938 6 log.go:172] (0xc00165a790) (0xc0012b60a0) Stream added, broadcasting: 5 I0515 14:00:21.698812 6 log.go:172] (0xc00165a790) Reply frame received for 5 I0515 14:00:21.770804 6 log.go:172] (0xc00165a790) Data frame received for 5 I0515 14:00:21.770860 6 log.go:172] (0xc0012b60a0) (5) Data frame handling I0515 14:00:21.770892 6 log.go:172] (0xc00165a790) Data frame received for 3 I0515 14:00:21.770926 6 log.go:172] (0xc001c25f40) (3) Data frame handling I0515 14:00:21.770965 6 log.go:172] (0xc001c25f40) (3) Data frame sent I0515 14:00:21.770988 6 log.go:172] (0xc00165a790) Data frame received for 3 I0515 14:00:21.771003 6 log.go:172] (0xc001c25f40) (3) Data frame handling I0515 14:00:21.772214 6 log.go:172] (0xc00165a790) Data frame received for 1 I0515 14:00:21.772237 6 log.go:172] (0xc0023e5e00) (1) Data frame handling I0515 14:00:21.772248 6 log.go:172] (0xc0023e5e00) (1) Data frame sent I0515 14:00:21.772264 6 log.go:172] (0xc00165a790) (0xc0023e5e00) Stream removed, broadcasting: 1 I0515 14:00:21.772283 6 log.go:172] (0xc00165a790) Go away received I0515 14:00:21.772377 6 log.go:172] (0xc00165a790) (0xc0023e5e00) Stream removed, broadcasting: 1 I0515 14:00:21.772402 6 log.go:172] (0xc00165a790) (0xc001c25f40) Stream removed, broadcasting: 3 I0515 14:00:21.772422 6 log.go:172] (0xc00165a790) (0xc0012b60a0) Stream removed, broadcasting: 5 May 15 14:00:21.772: INFO: Exec stderr: "" May 15 14:00:21.772: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:21.772: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:21.804089 6 log.go:172] (0xc00165b080) (0xc002d3c1e0) Create stream I0515 14:00:21.804112 6 log.go:172] (0xc00165b080) (0xc002d3c1e0) Stream added, broadcasting: 1 I0515 14:00:21.807061 6 log.go:172] (0xc00165b080) Reply frame received for 1 I0515 14:00:21.807098 6 log.go:172] (0xc00165b080) (0xc002b9b7c0) Create stream I0515 14:00:21.807112 6 log.go:172] (0xc00165b080) (0xc002b9b7c0) Stream added, broadcasting: 3 I0515 14:00:21.808300 6 log.go:172] (0xc00165b080) Reply frame received for 3 I0515 14:00:21.808340 6 log.go:172] (0xc00165b080) 
(0xc002b9b900) Create stream I0515 14:00:21.808361 6 log.go:172] (0xc00165b080) (0xc002b9b900) Stream added, broadcasting: 5 I0515 14:00:21.809786 6 log.go:172] (0xc00165b080) Reply frame received for 5 I0515 14:00:21.873478 6 log.go:172] (0xc00165b080) Data frame received for 5 I0515 14:00:21.873521 6 log.go:172] (0xc002b9b900) (5) Data frame handling I0515 14:00:21.873546 6 log.go:172] (0xc00165b080) Data frame received for 3 I0515 14:00:21.873569 6 log.go:172] (0xc002b9b7c0) (3) Data frame handling I0515 14:00:21.873594 6 log.go:172] (0xc002b9b7c0) (3) Data frame sent I0515 14:00:21.873645 6 log.go:172] (0xc00165b080) Data frame received for 3 I0515 14:00:21.873667 6 log.go:172] (0xc002b9b7c0) (3) Data frame handling I0515 14:00:21.875060 6 log.go:172] (0xc00165b080) Data frame received for 1 I0515 14:00:21.875111 6 log.go:172] (0xc002d3c1e0) (1) Data frame handling I0515 14:00:21.875133 6 log.go:172] (0xc002d3c1e0) (1) Data frame sent I0515 14:00:21.875165 6 log.go:172] (0xc00165b080) (0xc002d3c1e0) Stream removed, broadcasting: 1 I0515 14:00:21.875183 6 log.go:172] (0xc00165b080) Go away received I0515 14:00:21.875346 6 log.go:172] (0xc00165b080) (0xc002d3c1e0) Stream removed, broadcasting: 1 I0515 14:00:21.875381 6 log.go:172] (0xc00165b080) (0xc002b9b7c0) Stream removed, broadcasting: 3 I0515 14:00:21.875392 6 log.go:172] (0xc00165b080) (0xc002b9b900) Stream removed, broadcasting: 5 May 15 14:00:21.875: INFO: Exec stderr: "" May 15 14:00:21.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:21.875: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:21.910338 6 log.go:172] (0xc001a80000) (0xc0012b6820) Create stream I0515 14:00:21.910363 6 log.go:172] (0xc001a80000) (0xc0012b6820) Stream added, broadcasting: 1 I0515 14:00:21.912596 6 log.go:172] (0xc001a80000) Reply frame received for 1 I0515 14:00:21.912634 6 log.go:172] (0xc001a80000) (0xc00247c0a0) Create stream I0515 14:00:21.912647 6 log.go:172] (0xc001a80000) (0xc00247c0a0) Stream added, broadcasting: 3 I0515 14:00:21.914010 6 log.go:172] (0xc001a80000) Reply frame received for 3 I0515 14:00:21.914055 6 log.go:172] (0xc001a80000) (0xc002d3c280) Create stream I0515 14:00:21.914069 6 log.go:172] (0xc001a80000) (0xc002d3c280) Stream added, broadcasting: 5 I0515 14:00:21.914851 6 log.go:172] (0xc001a80000) Reply frame received for 5 I0515 14:00:21.976615 6 log.go:172] (0xc001a80000) Data frame received for 5 I0515 14:00:21.976651 6 log.go:172] (0xc002d3c280) (5) Data frame handling I0515 14:00:21.976689 6 log.go:172] (0xc001a80000) Data frame received for 3 I0515 14:00:21.976704 6 log.go:172] (0xc00247c0a0) (3) Data frame handling I0515 14:00:21.976714 6 log.go:172] (0xc00247c0a0) (3) Data frame sent I0515 14:00:21.976722 6 log.go:172] (0xc001a80000) Data frame received for 3 I0515 14:00:21.976777 6 log.go:172] (0xc00247c0a0) (3) Data frame handling I0515 14:00:21.978410 6 log.go:172] (0xc001a80000) Data frame received for 1 I0515 14:00:21.978444 6 log.go:172] (0xc0012b6820) (1) Data frame handling I0515 14:00:21.978476 6 log.go:172] (0xc0012b6820) (1) Data frame sent I0515 14:00:21.978494 6 log.go:172] (0xc001a80000) (0xc0012b6820) Stream removed, broadcasting: 1 I0515 14:00:21.978511 6 log.go:172] (0xc001a80000) Go away received I0515 14:00:21.978634 6 log.go:172] (0xc001a80000) (0xc0012b6820) Stream removed, broadcasting: 1 I0515 14:00:21.978658 6 
log.go:172] (0xc001a80000) (0xc00247c0a0) Stream removed, broadcasting: 3 I0515 14:00:21.978670 6 log.go:172] (0xc001a80000) (0xc002d3c280) Stream removed, broadcasting: 5 May 15 14:00:21.978: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 15 14:00:21.978: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:21.978: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.011853 6 log.go:172] (0xc003183080) (0xc00247c3c0) Create stream I0515 14:00:22.011914 6 log.go:172] (0xc003183080) (0xc00247c3c0) Stream added, broadcasting: 1 I0515 14:00:22.014450 6 log.go:172] (0xc003183080) Reply frame received for 1 I0515 14:00:22.014482 6 log.go:172] (0xc003183080) (0xc00247c460) Create stream I0515 14:00:22.014499 6 log.go:172] (0xc003183080) (0xc00247c460) Stream added, broadcasting: 3 I0515 14:00:22.015446 6 log.go:172] (0xc003183080) Reply frame received for 3 I0515 14:00:22.015491 6 log.go:172] (0xc003183080) (0xc002d3c3c0) Create stream I0515 14:00:22.015505 6 log.go:172] (0xc003183080) (0xc002d3c3c0) Stream added, broadcasting: 5 I0515 14:00:22.016430 6 log.go:172] (0xc003183080) Reply frame received for 5 I0515 14:00:22.080114 6 log.go:172] (0xc003183080) Data frame received for 3 I0515 14:00:22.080158 6 log.go:172] (0xc00247c460) (3) Data frame handling I0515 14:00:22.080193 6 log.go:172] (0xc00247c460) (3) Data frame sent I0515 14:00:22.080207 6 log.go:172] (0xc003183080) Data frame received for 3 I0515 14:00:22.080214 6 log.go:172] (0xc00247c460) (3) Data frame handling I0515 14:00:22.080277 6 log.go:172] (0xc003183080) Data frame received for 5 I0515 14:00:22.080326 6 log.go:172] (0xc002d3c3c0) (5) Data frame handling I0515 14:00:22.081789 6 log.go:172] (0xc003183080) Data frame received for 1 I0515 14:00:22.081802 6 log.go:172] (0xc00247c3c0) (1) Data frame handling I0515 14:00:22.081810 6 log.go:172] (0xc00247c3c0) (1) Data frame sent I0515 14:00:22.081822 6 log.go:172] (0xc003183080) (0xc00247c3c0) Stream removed, broadcasting: 1 I0515 14:00:22.081867 6 log.go:172] (0xc003183080) Go away received I0515 14:00:22.081913 6 log.go:172] (0xc003183080) (0xc00247c3c0) Stream removed, broadcasting: 1 I0515 14:00:22.081925 6 log.go:172] (0xc003183080) (0xc00247c460) Stream removed, broadcasting: 3 I0515 14:00:22.081952 6 log.go:172] (0xc003183080) (0xc002d3c3c0) Stream removed, broadcasting: 5 May 15 14:00:22.081: INFO: Exec stderr: "" May 15 14:00:22.082: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:22.082: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.115878 6 log.go:172] (0xc002b52c60) (0xc0004a66e0) Create stream I0515 14:00:22.115916 6 log.go:172] (0xc002b52c60) (0xc0004a66e0) Stream added, broadcasting: 1 I0515 14:00:22.118956 6 log.go:172] (0xc002b52c60) Reply frame received for 1 I0515 14:00:22.118993 6 log.go:172] (0xc002b52c60) (0xc0004a6c80) Create stream I0515 14:00:22.119005 6 log.go:172] (0xc002b52c60) (0xc0004a6c80) Stream added, broadcasting: 3 I0515 14:00:22.120044 6 log.go:172] (0xc002b52c60) Reply frame received for 3 I0515 14:00:22.120083 6 log.go:172] (0xc002b52c60) (0xc002b9b9a0) Create stream I0515 14:00:22.120097 6 log.go:172] (0xc002b52c60) 
(0xc002b9b9a0) Stream added, broadcasting: 5 I0515 14:00:22.121229 6 log.go:172] (0xc002b52c60) Reply frame received for 5 I0515 14:00:22.192490 6 log.go:172] (0xc002b52c60) Data frame received for 5 I0515 14:00:22.192530 6 log.go:172] (0xc002b9b9a0) (5) Data frame handling I0515 14:00:22.192554 6 log.go:172] (0xc002b52c60) Data frame received for 3 I0515 14:00:22.192566 6 log.go:172] (0xc0004a6c80) (3) Data frame handling I0515 14:00:22.192593 6 log.go:172] (0xc0004a6c80) (3) Data frame sent I0515 14:00:22.192616 6 log.go:172] (0xc002b52c60) Data frame received for 3 I0515 14:00:22.192637 6 log.go:172] (0xc0004a6c80) (3) Data frame handling I0515 14:00:22.193848 6 log.go:172] (0xc002b52c60) Data frame received for 1 I0515 14:00:22.193912 6 log.go:172] (0xc0004a66e0) (1) Data frame handling I0515 14:00:22.193928 6 log.go:172] (0xc0004a66e0) (1) Data frame sent I0515 14:00:22.193941 6 log.go:172] (0xc002b52c60) (0xc0004a66e0) Stream removed, broadcasting: 1 I0515 14:00:22.193955 6 log.go:172] (0xc002b52c60) Go away received I0515 14:00:22.194168 6 log.go:172] (0xc002b52c60) (0xc0004a66e0) Stream removed, broadcasting: 1 I0515 14:00:22.194184 6 log.go:172] (0xc002b52c60) (0xc0004a6c80) Stream removed, broadcasting: 3 I0515 14:00:22.194198 6 log.go:172] (0xc002b52c60) (0xc002b9b9a0) Stream removed, broadcasting: 5 May 15 14:00:22.194: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 15 14:00:22.194: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:22.194: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.220311 6 log.go:172] (0xc001e3ce70) (0xc002d3c960) Create stream I0515 14:00:22.220340 6 log.go:172] (0xc001e3ce70) (0xc002d3c960) Stream added, broadcasting: 1 I0515 14:00:22.228402 6 log.go:172] (0xc001e3ce70) Reply frame received for 1 I0515 14:00:22.228451 6 log.go:172] (0xc001e3ce70) (0xc0012b6d20) Create stream I0515 14:00:22.228464 6 log.go:172] (0xc001e3ce70) (0xc0012b6d20) Stream added, broadcasting: 3 I0515 14:00:22.229638 6 log.go:172] (0xc001e3ce70) Reply frame received for 3 I0515 14:00:22.229677 6 log.go:172] (0xc001e3ce70) (0xc0004a6d20) Create stream I0515 14:00:22.229698 6 log.go:172] (0xc001e3ce70) (0xc0004a6d20) Stream added, broadcasting: 5 I0515 14:00:22.231134 6 log.go:172] (0xc001e3ce70) Reply frame received for 5 I0515 14:00:22.276718 6 log.go:172] (0xc001e3ce70) Data frame received for 3 I0515 14:00:22.276755 6 log.go:172] (0xc0012b6d20) (3) Data frame handling I0515 14:00:22.276767 6 log.go:172] (0xc0012b6d20) (3) Data frame sent I0515 14:00:22.276772 6 log.go:172] (0xc001e3ce70) Data frame received for 3 I0515 14:00:22.276776 6 log.go:172] (0xc0012b6d20) (3) Data frame handling I0515 14:00:22.276794 6 log.go:172] (0xc001e3ce70) Data frame received for 5 I0515 14:00:22.276817 6 log.go:172] (0xc0004a6d20) (5) Data frame handling I0515 14:00:22.278478 6 log.go:172] (0xc001e3ce70) Data frame received for 1 I0515 14:00:22.278500 6 log.go:172] (0xc002d3c960) (1) Data frame handling I0515 14:00:22.278513 6 log.go:172] (0xc002d3c960) (1) Data frame sent I0515 14:00:22.278526 6 log.go:172] (0xc001e3ce70) (0xc002d3c960) Stream removed, broadcasting: 1 I0515 14:00:22.278559 6 log.go:172] (0xc001e3ce70) Go away received I0515 14:00:22.278660 6 log.go:172] (0xc001e3ce70) (0xc002d3c960) Stream removed, broadcasting: 1 I0515 
14:00:22.278675 6 log.go:172] (0xc001e3ce70) (0xc0012b6d20) Stream removed, broadcasting: 3 I0515 14:00:22.278683 6 log.go:172] (0xc001e3ce70) (0xc0004a6d20) Stream removed, broadcasting: 5 May 15 14:00:22.278: INFO: Exec stderr: "" May 15 14:00:22.278: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:22.278: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.307201 6 log.go:172] (0xc002b538c0) (0xc0004a72c0) Create stream I0515 14:00:22.307234 6 log.go:172] (0xc002b538c0) (0xc0004a72c0) Stream added, broadcasting: 1 I0515 14:00:22.309753 6 log.go:172] (0xc002b538c0) Reply frame received for 1 I0515 14:00:22.309803 6 log.go:172] (0xc002b538c0) (0xc0004a7400) Create stream I0515 14:00:22.309819 6 log.go:172] (0xc002b538c0) (0xc0004a7400) Stream added, broadcasting: 3 I0515 14:00:22.311007 6 log.go:172] (0xc002b538c0) Reply frame received for 3 I0515 14:00:22.311051 6 log.go:172] (0xc002b538c0) (0xc002b9ba40) Create stream I0515 14:00:22.311071 6 log.go:172] (0xc002b538c0) (0xc002b9ba40) Stream added, broadcasting: 5 I0515 14:00:22.311955 6 log.go:172] (0xc002b538c0) Reply frame received for 5 I0515 14:00:22.385524 6 log.go:172] (0xc002b538c0) Data frame received for 5 I0515 14:00:22.385549 6 log.go:172] (0xc002b9ba40) (5) Data frame handling I0515 14:00:22.385568 6 log.go:172] (0xc002b538c0) Data frame received for 3 I0515 14:00:22.385581 6 log.go:172] (0xc0004a7400) (3) Data frame handling I0515 14:00:22.385588 6 log.go:172] (0xc0004a7400) (3) Data frame sent I0515 14:00:22.385608 6 log.go:172] (0xc002b538c0) Data frame received for 3 I0515 14:00:22.385623 6 log.go:172] (0xc0004a7400) (3) Data frame handling I0515 14:00:22.386976 6 log.go:172] (0xc002b538c0) Data frame received for 1 I0515 14:00:22.387008 6 log.go:172] (0xc0004a72c0) (1) Data frame handling I0515 14:00:22.387028 6 log.go:172] (0xc0004a72c0) (1) Data frame sent I0515 14:00:22.387052 6 log.go:172] (0xc002b538c0) (0xc0004a72c0) Stream removed, broadcasting: 1 I0515 14:00:22.387066 6 log.go:172] (0xc002b538c0) Go away received I0515 14:00:22.387184 6 log.go:172] (0xc002b538c0) (0xc0004a72c0) Stream removed, broadcasting: 1 I0515 14:00:22.387225 6 log.go:172] (0xc002b538c0) (0xc0004a7400) Stream removed, broadcasting: 3 I0515 14:00:22.387257 6 log.go:172] (0xc002b538c0) (0xc002b9ba40) Stream removed, broadcasting: 5 May 15 14:00:22.387: INFO: Exec stderr: "" May 15 14:00:22.387: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:22.387: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.420417 6 log.go:172] (0xc0029d2000) (0xc002d3cd20) Create stream I0515 14:00:22.420463 6 log.go:172] (0xc0029d2000) (0xc002d3cd20) Stream added, broadcasting: 1 I0515 14:00:22.423527 6 log.go:172] (0xc0029d2000) Reply frame received for 1 I0515 14:00:22.423588 6 log.go:172] (0xc0029d2000) (0xc0012b6dc0) Create stream I0515 14:00:22.423626 6 log.go:172] (0xc0029d2000) (0xc0012b6dc0) Stream added, broadcasting: 3 I0515 14:00:22.424593 6 log.go:172] (0xc0029d2000) Reply frame received for 3 I0515 14:00:22.424635 6 log.go:172] (0xc0029d2000) (0xc002b9bae0) Create stream I0515 14:00:22.424648 6 log.go:172] (0xc0029d2000) (0xc002b9bae0) Stream added, broadcasting: 5 I0515 14:00:22.425897 6 
log.go:172] (0xc0029d2000) Reply frame received for 5 I0515 14:00:22.491000 6 log.go:172] (0xc0029d2000) Data frame received for 5 I0515 14:00:22.491041 6 log.go:172] (0xc002b9bae0) (5) Data frame handling I0515 14:00:22.491063 6 log.go:172] (0xc0029d2000) Data frame received for 3 I0515 14:00:22.491075 6 log.go:172] (0xc0012b6dc0) (3) Data frame handling I0515 14:00:22.491083 6 log.go:172] (0xc0012b6dc0) (3) Data frame sent I0515 14:00:22.491096 6 log.go:172] (0xc0029d2000) Data frame received for 3 I0515 14:00:22.491108 6 log.go:172] (0xc0012b6dc0) (3) Data frame handling I0515 14:00:22.492601 6 log.go:172] (0xc0029d2000) Data frame received for 1 I0515 14:00:22.492631 6 log.go:172] (0xc002d3cd20) (1) Data frame handling I0515 14:00:22.492656 6 log.go:172] (0xc002d3cd20) (1) Data frame sent I0515 14:00:22.492673 6 log.go:172] (0xc0029d2000) (0xc002d3cd20) Stream removed, broadcasting: 1 I0515 14:00:22.492690 6 log.go:172] (0xc0029d2000) Go away received I0515 14:00:22.492822 6 log.go:172] (0xc0029d2000) (0xc002d3cd20) Stream removed, broadcasting: 1 I0515 14:00:22.492841 6 log.go:172] (0xc0029d2000) (0xc0012b6dc0) Stream removed, broadcasting: 3 I0515 14:00:22.492852 6 log.go:172] (0xc0029d2000) (0xc002b9bae0) Stream removed, broadcasting: 5 May 15 14:00:22.492: INFO: Exec stderr: "" May 15 14:00:22.492: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6449 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:00:22.492: INFO: >>> kubeConfig: /root/.kube/config I0515 14:00:22.518578 6 log.go:172] (0xc00262aa50) (0xc002b9bf40) Create stream I0515 14:00:22.518601 6 log.go:172] (0xc00262aa50) (0xc002b9bf40) Stream added, broadcasting: 1 I0515 14:00:22.521489 6 log.go:172] (0xc00262aa50) Reply frame received for 1 I0515 14:00:22.521532 6 log.go:172] (0xc00262aa50) (0xc0004a74a0) Create stream I0515 14:00:22.521545 6 log.go:172] (0xc00262aa50) (0xc0004a74a0) Stream added, broadcasting: 3 I0515 14:00:22.522459 6 log.go:172] (0xc00262aa50) Reply frame received for 3 I0515 14:00:22.522491 6 log.go:172] (0xc00262aa50) (0xc0004a7540) Create stream I0515 14:00:22.522501 6 log.go:172] (0xc00262aa50) (0xc0004a7540) Stream added, broadcasting: 5 I0515 14:00:22.523433 6 log.go:172] (0xc00262aa50) Reply frame received for 5 I0515 14:00:22.594199 6 log.go:172] (0xc00262aa50) Data frame received for 5 I0515 14:00:22.594247 6 log.go:172] (0xc00262aa50) Data frame received for 3 I0515 14:00:22.594279 6 log.go:172] (0xc0004a74a0) (3) Data frame handling I0515 14:00:22.594299 6 log.go:172] (0xc0004a74a0) (3) Data frame sent I0515 14:00:22.594329 6 log.go:172] (0xc00262aa50) Data frame received for 3 I0515 14:00:22.594347 6 log.go:172] (0xc0004a74a0) (3) Data frame handling I0515 14:00:22.594377 6 log.go:172] (0xc0004a7540) (5) Data frame handling I0515 14:00:22.595452 6 log.go:172] (0xc00262aa50) Data frame received for 1 I0515 14:00:22.595480 6 log.go:172] (0xc002b9bf40) (1) Data frame handling I0515 14:00:22.595497 6 log.go:172] (0xc002b9bf40) (1) Data frame sent I0515 14:00:22.595517 6 log.go:172] (0xc00262aa50) (0xc002b9bf40) Stream removed, broadcasting: 1 I0515 14:00:22.595539 6 log.go:172] (0xc00262aa50) Go away received I0515 14:00:22.595728 6 log.go:172] (0xc00262aa50) (0xc002b9bf40) Stream removed, broadcasting: 1 I0515 14:00:22.595795 6 log.go:172] (0xc00262aa50) (0xc0004a74a0) Stream removed, broadcasting: 3 I0515 14:00:22.595818 6 log.go:172] (0xc00262aa50) (0xc0004a7540) 
Stream removed, broadcasting: 5 May 15 14:00:22.595: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:00:22.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6449" for this suite. May 15 14:01:12.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:01:12.698: INFO: namespace e2e-kubelet-etc-hosts-6449 deletion completed in 50.098474154s • [SLOW TEST:61.319 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:01:12.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:01:38.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-156" for this suite. May 15 14:01:44.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:01:45.053: INFO: namespace namespaces-156 deletion completed in 6.087524717s STEP: Destroying namespace "nsdeletetest-2474" for this suite. May 15 14:01:45.055: INFO: Namespace nsdeletetest-2474 was already deleted STEP: Destroying namespace "nsdeletetest-9239" for this suite. 
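
The namespace-deletion behavior asserted above can be reproduced by hand. A minimal kubectl sketch; the namespace and pod names here are illustrative, not the generated ones from the suite:

    kubectl create namespace nsdelete-demo
    kubectl run test-pod --image=k8s.gcr.io/pause:3.1 --restart=Never --namespace=nsdelete-demo
    kubectl wait --for=condition=Ready pod/test-pod --namespace=nsdelete-demo
    # Deleting the namespace cascades to every object inside it:
    kubectl delete namespace nsdelete-demo
    # After recreating the namespace, no pods survive the round trip:
    kubectl create namespace nsdelete-demo
    kubectl get pods --namespace=nsdelete-demo    # expect "No resources found."
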
May 15 14:01:51.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:01:51.204: INFO: namespace nsdeletetest-9239 deletion completed in 6.148422438s • [SLOW TEST:38.506 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:01:51.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 14:01:51.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2998' May 15 14:01:54.368: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 14:01:54.368: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 15 14:01:54.373: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 15 14:01:54.399: INFO: scanned /root for discovery docs: May 15 14:01:54.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2998' May 15 14:02:11.878: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 15 14:02:11.878: INFO: stdout: "Created e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740\nScaling up e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 15 14:02:11.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2998' May 15 14:02:11.967: INFO: stderr: "" May 15 14:02:11.967: INFO: stdout: "e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740-z47zn " May 15 14:02:11.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740-z47zn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998' May 15 14:02:12.055: INFO: stderr: "" May 15 14:02:12.055: INFO: stdout: "true" May 15 14:02:12.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740-z47zn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998' May 15 14:02:12.157: INFO: stderr: "" May 15 14:02:12.157: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 15 14:02:12.157: INFO: e2e-test-nginx-rc-a3dc2c273a83da8aa75000cb0cc34740-z47zn is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 15 14:02:12.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2998' May 15 14:02:12.273: INFO: stderr: "" May 15 14:02:12.273: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:02:12.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2998" for this suite.
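
As the stderr above notes, rolling-update is deprecated and only works with replication controllers. A rough modern equivalent uses a Deployment; a sketch, with an illustrative deployment name (rollout restart needs kubectl v1.15 or newer):

    kubectl create deployment e2e-demo --image=docker.io/library/nginx:1.14-alpine
    # Re-applying the same image is a no-op for a Deployment, so a
    # "rolling update to the same image" is expressed as an explicit restart:
    kubectl rollout restart deployment/e2e-demo
    kubectl rollout status deployment/e2e-demo
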
May 15 14:02:34.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:02:34.408: INFO: namespace kubectl-2998 deletion completed in 22.106852852s • [SLOW TEST:43.203 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:02:34.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:02:40.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1380" for this suite. 
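
The /etc/hosts entries this test verifies come straight from the pod spec's hostAliases field. A minimal sketch; the pod name and aliases are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      containers:
      - name: busybox
        image: busybox
        command: ["cat", "/etc/hosts"]   # the kubelet-managed file includes the aliases above
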
May 15 14:03:24.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:03:24.684: INFO: namespace kubelet-test-1380 deletion completed in 44.116199109s • [SLOW TEST:50.275 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:03:24.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-8a8f9bef-4d85-457e-ad01-10182a6ad23c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-8a8f9bef-4d85-457e-ad01-10182a6ad23c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:04:43.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2920" for this suite. 
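
The long "waiting to observe update in volume" phase above is expected: configMap volume contents are refreshed by the kubelet asynchronously (roughly the kubelet sync period plus the configMap cache TTL, so typically up to a minute or two with defaults), not at the moment of the API update. A sketch of the wiring under test; names are illustrative, the suite appends a UUID to its configMap name:

    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-volume-demo
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: configmap-test-upd
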
May 15 14:05:05.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:05:05.326: INFO: namespace configmap-2920 deletion completed in 22.110793333s • [SLOW TEST:100.641 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:05:05.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 15 14:05:05.414: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045956,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 14:05:05.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045957,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 15 14:05:05.415: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045958,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 15 14:05:15.456: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045979,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 14:05:15.457: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045980,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 15 14:05:15.457: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-501,SelfLink:/api/v1/namespaces/watch-501/configmaps/e2e-watch-test-label-changed,UID:6e7f7f73-783b-40c0-a738-1b4eec3eb723,ResourceVersion:11045981,Generation:0,CreationTimestamp:2020-05-15 14:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:05:15.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-501" for this suite. 
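
The DELETED/ADDED pairs above are a property of selector-scoped watches: an object that stops matching the label selector is reported as DELETED, and one that matches again is reported as ADDED, even though the underlying object was only relabeled. The same stream can be observed by hand (sketch):

    # Watch only configmaps carrying the label the test flips back and forth:
    kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
        --watch --namespace=watch-501
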
May 15 14:05:21.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:05:21.529: INFO: namespace watch-501 deletion completed in 6.068040347s • [SLOW TEST:16.202 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:05:21.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8554 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 14:05:21.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 14:05:43.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.2.116&port=8081&tries=1'] Namespace:pod-network-test-8554 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:05:43.774: INFO: >>> kubeConfig: /root/.kube/config I0515 14:05:43.803433 6 log.go:172] (0xc0007bc6e0) (0xc0012fa960) Create stream I0515 14:05:43.803460 6 log.go:172] (0xc0007bc6e0) (0xc0012fa960) Stream added, broadcasting: 1 I0515 14:05:43.805627 6 log.go:172] (0xc0007bc6e0) Reply frame received for 1 I0515 14:05:43.805675 6 log.go:172] (0xc0007bc6e0) (0xc000ade0a0) Create stream I0515 14:05:43.805696 6 log.go:172] (0xc0007bc6e0) (0xc000ade0a0) Stream added, broadcasting: 3 I0515 14:05:43.806748 6 log.go:172] (0xc0007bc6e0) Reply frame received for 3 I0515 14:05:43.806801 6 log.go:172] (0xc0007bc6e0) (0xc0003a6000) Create stream I0515 14:05:43.806823 6 log.go:172] (0xc0007bc6e0) (0xc0003a6000) Stream added, broadcasting: 5 I0515 14:05:43.808087 6 log.go:172] (0xc0007bc6e0) Reply frame received for 5 I0515 14:05:43.887150 6 log.go:172] (0xc0007bc6e0) Data frame received for 3 I0515 14:05:43.887183 6 log.go:172] (0xc000ade0a0) (3) Data frame handling I0515 14:05:43.887193 6 log.go:172] (0xc000ade0a0) (3) Data frame sent I0515 14:05:43.887980 6 log.go:172] (0xc0007bc6e0) Data frame received for 3 I0515 14:05:43.887993 6 log.go:172] (0xc000ade0a0) (3) Data frame handling I0515 14:05:43.888371 6 log.go:172] (0xc0007bc6e0) Data frame received for 5 I0515 14:05:43.888393 6 log.go:172] (0xc0003a6000) (5) Data frame handling I0515 14:05:43.890220 6 log.go:172] (0xc0007bc6e0) Data frame received for 1 I0515 14:05:43.890266 6 log.go:172] (0xc0012fa960) (1) Data frame handling I0515 14:05:43.890313 6 
log.go:172] (0xc0012fa960) (1) Data frame sent I0515 14:05:43.890357 6 log.go:172] (0xc0007bc6e0) (0xc0012fa960) Stream removed, broadcasting: 1 I0515 14:05:43.890380 6 log.go:172] (0xc0007bc6e0) Go away received I0515 14:05:43.890544 6 log.go:172] (0xc0007bc6e0) (0xc0012fa960) Stream removed, broadcasting: 1 I0515 14:05:43.890573 6 log.go:172] (0xc0007bc6e0) (0xc000ade0a0) Stream removed, broadcasting: 3 I0515 14:05:43.890583 6 log.go:172] (0xc0007bc6e0) (0xc0003a6000) Stream removed, broadcasting: 5 May 15 14:05:43.890: INFO: Waiting for endpoints: map[] May 15 14:05:43.893: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.1.32&port=8081&tries=1'] Namespace:pod-network-test-8554 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 14:05:43.893: INFO: >>> kubeConfig: /root/.kube/config I0515 14:05:43.919828 6 log.go:172] (0xc0007bcf20) (0xc0012faf00) Create stream I0515 14:05:43.919854 6 log.go:172] (0xc0007bcf20) (0xc0012faf00) Stream added, broadcasting: 1 I0515 14:05:43.922028 6 log.go:172] (0xc0007bcf20) Reply frame received for 1 I0515 14:05:43.922071 6 log.go:172] (0xc0007bcf20) (0xc0012fb180) Create stream I0515 14:05:43.922086 6 log.go:172] (0xc0007bcf20) (0xc0012fb180) Stream added, broadcasting: 3 I0515 14:05:43.923038 6 log.go:172] (0xc0007bcf20) Reply frame received for 3 I0515 14:05:43.923070 6 log.go:172] (0xc0007bcf20) (0xc0004a6500) Create stream I0515 14:05:43.923082 6 log.go:172] (0xc0007bcf20) (0xc0004a6500) Stream added, broadcasting: 5 I0515 14:05:43.923969 6 log.go:172] (0xc0007bcf20) Reply frame received for 5 I0515 14:05:44.001469 6 log.go:172] (0xc0007bcf20) Data frame received for 3 I0515 14:05:44.001497 6 log.go:172] (0xc0012fb180) (3) Data frame handling I0515 14:05:44.001514 6 log.go:172] (0xc0012fb180) (3) Data frame sent I0515 14:05:44.002016 6 log.go:172] (0xc0007bcf20) Data frame received for 3 I0515 14:05:44.002035 6 log.go:172] (0xc0007bcf20) Data frame received for 5 I0515 14:05:44.002059 6 log.go:172] (0xc0004a6500) (5) Data frame handling I0515 14:05:44.002082 6 log.go:172] (0xc0012fb180) (3) Data frame handling I0515 14:05:44.003324 6 log.go:172] (0xc0007bcf20) Data frame received for 1 I0515 14:05:44.003345 6 log.go:172] (0xc0012faf00) (1) Data frame handling I0515 14:05:44.003360 6 log.go:172] (0xc0012faf00) (1) Data frame sent I0515 14:05:44.003380 6 log.go:172] (0xc0007bcf20) (0xc0012faf00) Stream removed, broadcasting: 1 I0515 14:05:44.003402 6 log.go:172] (0xc0007bcf20) Go away received I0515 14:05:44.003566 6 log.go:172] (0xc0007bcf20) (0xc0012faf00) Stream removed, broadcasting: 1 I0515 14:05:44.003583 6 log.go:172] (0xc0007bcf20) (0xc0012fb180) Stream removed, broadcasting: 3 I0515 14:05:44.003591 6 log.go:172] (0xc0007bcf20) (0xc0004a6500) Stream removed, broadcasting: 5 May 15 14:05:44.003: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:05:44.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8554" for this suite. 
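
The ExecWithOptions entries above show the probe pattern: from the host-network helper pod, curl the webserver container of one test pod and ask it to dial a peer over UDP, reporting which hostname answered. Equivalently, by hand; the JSON response shape comes from the netexec test image and is illustrative:

    kubectl exec host-test-container-pod --namespace=pod-network-test-8554 -- \
        curl -g -q -s 'http://10.244.2.117:8080/dial?request=hostName&protocol=udp&host=10.244.2.116&port=8081&tries=1'
    # => {"responses":["netserver-0"]}
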
May 15 14:06:08.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:06:08.117: INFO: namespace pod-network-test-8554 deletion completed in 24.105601034s • [SLOW TEST:46.588 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:06:08.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-85ef16aa-d291-460c-bdc1-2634ea77e3c6 STEP: Creating a pod to test consume secrets May 15 14:06:08.314: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c" in namespace "projected-5107" to be "success or failure" May 15 14:06:08.318: INFO: Pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046273ms May 15 14:06:10.323: INFO: Pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009522991s May 15 14:06:12.332: INFO: Pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c": Phase="Running", Reason="", readiness=true. Elapsed: 4.017790062s May 15 14:06:14.336: INFO: Pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021816199s STEP: Saw pod success May 15 14:06:14.336: INFO: Pod "pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c" satisfied condition "success or failure" May 15 14:06:14.340: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c container projected-secret-volume-test: STEP: delete the pod May 15 14:06:14.390: INFO: Waiting for pod pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c to disappear May 15 14:06:14.456: INFO: Pod pod-projected-secrets-bc4f038d-4766-43a3-b4ea-d6af695bd74c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:06:14.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5107" for this suite. 
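
"With mappings" refers to the items/path remapping inside the projected volume source: the secret key is exposed under a different file name. A minimal sketch; the secret name, key, and paths are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["cat", "/etc/projected/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-test-map
              items:
              - key: data-1
                path: new-path-data-1   # the "mapping": key data-1 surfaces under this file name
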
May 15 14:06:20.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:06:20.697: INFO: namespace projected-5107 deletion completed in 6.235900613s • [SLOW TEST:12.580 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:06:20.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:06:24.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4089" for this suite. May 15 14:07:14.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:07:14.934: INFO: namespace kubelet-test-4089 deletion completed in 50.077331505s • [SLOW TEST:54.236 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:07:14.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-33508412-756c-468f-8ead-d60ac357ce8a STEP: Creating a pod to test consume configMaps May 15 14:07:14.997: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf" in namespace "projected-8237" to be "success or failure" May 15 14:07:15.002: INFO: Pod "pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648608ms May 15 14:07:17.020: INFO: Pod "pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02255537s May 15 14:07:19.023: INFO: Pod "pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026195025s STEP: Saw pod success May 15 14:07:19.023: INFO: Pod "pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf" satisfied condition "success or failure" May 15 14:07:19.026: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf container projected-configmap-volume-test: STEP: delete the pod May 15 14:07:19.056: INFO: Waiting for pod pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf to disappear May 15 14:07:19.115: INFO: Pod pod-projected-configmaps-0074b1cd-9eeb-4159-8a95-96362edc1cbf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:07:19.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8237" for this suite. May 15 14:07:25.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:07:25.247: INFO: namespace projected-8237 deletion completed in 6.128168531s • [SLOW TEST:10.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:07:25.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-lhk6 STEP: Creating a pod to test atomic-volume-subpath May 15 14:07:25.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lhk6" in namespace "subpath-9054" to be "success or failure" May 15 14:07:25.378: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.936322ms May 15 14:07:27.547: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.175074747s May 15 14:07:29.550: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 4.178614743s May 15 14:07:31.555: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 6.183778793s May 15 14:07:33.560: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 8.188503166s May 15 14:07:35.564: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 10.192847456s May 15 14:07:37.576: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 12.204158262s May 15 14:07:39.579: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 14.207425137s May 15 14:07:41.582: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 16.210636796s May 15 14:07:43.590: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 18.217941542s May 15 14:07:45.594: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 20.222489888s May 15 14:07:47.598: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Running", Reason="", readiness=true. Elapsed: 22.226104711s May 15 14:07:49.602: INFO: Pod "pod-subpath-test-configmap-lhk6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.230490267s STEP: Saw pod success May 15 14:07:49.602: INFO: Pod "pod-subpath-test-configmap-lhk6" satisfied condition "success or failure" May 15 14:07:49.605: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-lhk6 container test-container-subpath-configmap-lhk6: STEP: delete the pod May 15 14:07:49.640: INFO: Waiting for pod pod-subpath-test-configmap-lhk6 to disappear May 15 14:07:49.683: INFO: Pod pod-subpath-test-configmap-lhk6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lhk6 May 15 14:07:49.683: INFO: Deleting pod "pod-subpath-test-configmap-lhk6" in namespace "subpath-9054" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:07:49.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9054" for this suite. 
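
subPath mounts a single entry of a volume at the mount point instead of the whole volume; the "atomic writer" part checks the file stays consistent while the configMap volume is rewritten underneath it. A sketch of the mount shape, with illustrative names. Note that, unlike the full-volume configMap mount tested earlier, a subPath mount does not pick up later configMap updates:

    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["cat", "/test/sub-file"]
        volumeMounts:
        - name: config
          mountPath: /test/sub-file
          subPath: data-1          # mount just this key's file, not the directory
      volumes:
      - name: config
        configMap:
          name: my-configmap
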
May 15 14:07:55.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:07:55.790: INFO: namespace subpath-9054 deletion completed in 6.100098609s • [SLOW TEST:30.542 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:07:55.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 15 14:07:55.852: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 14:07:55.858: INFO: Waiting for terminating namespaces to be deleted... May 15 14:07:55.860: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 15 14:07:55.864: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 15 14:07:55.864: INFO: Container kube-proxy ready: true, restart count 0 May 15 14:07:55.864: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 15 14:07:55.864: INFO: Container kindnet-cni ready: true, restart count 0 May 15 14:07:55.864: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 15 14:07:55.886: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 15 14:07:55.886: INFO: Container kube-proxy ready: true, restart count 0 May 15 14:07:55.886: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 15 14:07:55.886: INFO: Container kindnet-cni ready: true, restart count 0 May 15 14:07:55.886: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 15 14:07:55.886: INFO: Container coredns ready: true, restart count 0 May 15 14:07:55.886: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 15 14:07:55.886: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 15 14:07:55.961: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 15 14:07:55.961: INFO: Pod 
coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 15 14:07:55.961: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 15 14:07:55.961: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 15 14:07:55.961: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 15 14:07:55.961: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e.160f38d3d6e6ac7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9689/filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e.160f38d42907de11], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e.160f38d47e260711], Reason = [Created], Message = [Created container filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e] STEP: Considering event: Type = [Normal], Name = [filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e.160f38d4a79e25e8], Reason = [Started], Message = [Started container filler-pod-69bfc97a-a55f-4f7d-841f-3b3f8250ab2e] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4.160f38d3dac113b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9689/filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4.160f38d450903eda], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4.160f38d4b17473ea], Reason = [Created], Message = [Created container filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4] STEP: Considering event: Type = [Normal], Name = [filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4.160f38d4c1028bd0], Reason = [Started], Message = [Started container filler-pod-bdb49cb4-b3ab-435c-8455-32c13e836ba4] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f38d541998519], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:08:03.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9689" for this suite. 
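
The FailedScheduling event above is driven purely by CPU requests: the filler pods consume most of each node's allocatable CPU, so a further pod whose request exceeds the remainder cannot be placed. A sketch of such a pod; the request value is illustrative, the suite computes it from node allocatable:

    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "1"   # more CPU than any node has left
    # => Warning FailedScheduling: 0/3 nodes are available: ... Insufficient cpu.
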
May 15 14:08:09.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:08:09.239: INFO: namespace sched-pred-9689 deletion completed in 6.077618771s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.449 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:08:09.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 14:08:09.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6153' May 15 14:08:09.533: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 14:08:09.533: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 15 14:08:09.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6153' May 15 14:08:09.688: INFO: stderr: "" May 15 14:08:09.688: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:08:09.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6153" for this suite. 
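
The stderr above flags --generator=job/v1 as deprecated; the non-deprecated spelling of the same operation is kubectl create job (available in kubectl v1.14+):

    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine \
        --namespace=kubectl-6153
    kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-6153
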
May 15 14:08:15.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:08:15.880: INFO: namespace kubectl-6153 deletion completed in 6.188735871s • [SLOW TEST:6.640 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:08:15.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 14:08:15.940: INFO: Waiting up to 5m0s for pod "pod-6e932f76-2448-4682-9a6e-33ef3fc1e693" in namespace "emptydir-451" to be "success or failure" May 15 14:08:15.960: INFO: Pod "pod-6e932f76-2448-4682-9a6e-33ef3fc1e693": Phase="Pending", Reason="", readiness=false. Elapsed: 19.802254ms May 15 14:08:18.016: INFO: Pod "pod-6e932f76-2448-4682-9a6e-33ef3fc1e693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075310422s May 15 14:08:20.019: INFO: Pod "pod-6e932f76-2448-4682-9a6e-33ef3fc1e693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078224856s STEP: Saw pod success May 15 14:08:20.019: INFO: Pod "pod-6e932f76-2448-4682-9a6e-33ef3fc1e693" satisfied condition "success or failure" May 15 14:08:20.021: INFO: Trying to get logs from node iruya-worker2 pod pod-6e932f76-2448-4682-9a6e-33ef3fc1e693 container test-container: STEP: delete the pod May 15 14:08:20.047: INFO: Waiting for pod pod-6e932f76-2448-4682-9a6e-33ef3fc1e693 to disappear May 15 14:08:20.094: INFO: Pod pod-6e932f76-2448-4682-9a6e-33ef3fc1e693 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:08:20.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-451" for this suite. 
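
In this test's naming scheme, (root,0644,tmpfs) means the file is created as root with mode 0644 on an emptyDir backed by memory; the tmpfs part is the medium: Memory field. A sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /test/file && chmod 0644 /test/file && stat -c '%U %a' /test/file && mount | grep /test"]
        volumeMounts:
        - name: test-volume
          mountPath: /test
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory   # tmpfs instead of node-local disk
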
May 15 14:08:26.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:08:26.342: INFO: namespace emptydir-451 deletion completed in 6.244657285s • [SLOW TEST:10.462 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:08:26.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:08:26.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 15 14:08:26.553: INFO: stderr: "" May 15 14:08:26.553: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:08:26.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5360" for this suite. 
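
The test asserts against the human-readable output shown above; the same fields are also available in machine-readable form, which is easier to check programmatically:

    kubectl version -o json   # gitVersion, gitCommit, buildDate, ... for client and server
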
May 15 14:08:32.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:08:32.636: INFO: namespace kubectl-5360 deletion completed in 6.078537819s • [SLOW TEST:6.294 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:08:32.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 15 14:08:32.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8135' May 15 14:08:33.083: INFO: stderr: "" May 15 14:08:33.083: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 14:08:33.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135' May 15 14:08:33.257: INFO: stderr: "" May 15 14:08:33.257: INFO: stdout: "update-demo-nautilus-kslm2 update-demo-nautilus-m9w2v " May 15 14:08:33.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kslm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:08:33.350: INFO: stderr: "" May 15 14:08:33.350: INFO: stdout: "" May 15 14:08:33.350: INFO: update-demo-nautilus-kslm2 is created but not running May 15 14:08:38.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135' May 15 14:08:38.462: INFO: stderr: "" May 15 14:08:38.462: INFO: stdout: "update-demo-nautilus-kslm2 update-demo-nautilus-m9w2v " May 15 14:08:38.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kslm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:08:38.563: INFO: stderr: "" May 15 14:08:38.563: INFO: stdout: "true" May 15 14:08:38.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kslm2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:08:38.660: INFO: stderr: "" May 15 14:08:38.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 14:08:38.660: INFO: validating pod update-demo-nautilus-kslm2 May 15 14:08:38.664: INFO: got data: { "image": "nautilus.jpg" } May 15 14:08:38.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 14:08:38.664: INFO: update-demo-nautilus-kslm2 is verified up and running May 15 14:08:38.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9w2v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:08:38.765: INFO: stderr: "" May 15 14:08:38.765: INFO: stdout: "true" May 15 14:08:38.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9w2v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:08:38.857: INFO: stderr: "" May 15 14:08:38.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 14:08:38.857: INFO: validating pod update-demo-nautilus-m9w2v May 15 14:08:38.861: INFO: got data: { "image": "nautilus.jpg" } May 15 14:08:38.861: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 14:08:38.861: INFO: update-demo-nautilus-m9w2v is verified up and running STEP: rolling-update to new replication controller May 15 14:08:38.864: INFO: scanned /root for discovery docs: May 15 14:08:38.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8135' May 15 14:09:01.510: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 15 14:09:01.510: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 14:09:01.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135' May 15 14:09:01.609: INFO: stderr: "" May 15 14:09:01.609: INFO: stdout: "update-demo-kitten-8bvr5 update-demo-kitten-psfsd " May 15 14:09:01.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8bvr5 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:09:01.701: INFO: stderr: "" May 15 14:09:01.701: INFO: stdout: "true" May 15 14:09:01.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8bvr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:09:01.784: INFO: stderr: "" May 15 14:09:01.784: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 15 14:09:01.784: INFO: validating pod update-demo-kitten-8bvr5 May 15 14:09:01.789: INFO: got data: { "image": "kitten.jpg" } May 15 14:09:01.789: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 15 14:09:01.789: INFO: update-demo-kitten-8bvr5 is verified up and running May 15 14:09:01.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psfsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:09:01.876: INFO: stderr: "" May 15 14:09:01.877: INFO: stdout: "true" May 15 14:09:01.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psfsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135' May 15 14:09:01.971: INFO: stderr: "" May 15 14:09:01.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 15 14:09:01.971: INFO: validating pod update-demo-kitten-psfsd May 15 14:09:01.985: INFO: got data: { "image": "kitten.jpg" } May 15 14:09:01.985: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 15 14:09:01.985: INFO: update-demo-kitten-psfsd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:09:01.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8135" for this suite. 
May 15 14:09:26.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:09:26.124: INFO: namespace kubectl-8135 deletion completed in 24.120010856s • [SLOW TEST:53.488 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:09:26.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 15 14:09:26.226: INFO: Waiting up to 5m0s for pod "pod-e5837869-4af6-4cf8-83f9-ea1455d4e481" in namespace "emptydir-2319" to be "success or failure" May 15 14:09:26.230: INFO: Pod "pod-e5837869-4af6-4cf8-83f9-ea1455d4e481": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710758ms May 15 14:09:28.273: INFO: Pod "pod-e5837869-4af6-4cf8-83f9-ea1455d4e481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046411433s May 15 14:09:30.277: INFO: Pod "pod-e5837869-4af6-4cf8-83f9-ea1455d4e481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050398689s STEP: Saw pod success May 15 14:09:30.277: INFO: Pod "pod-e5837869-4af6-4cf8-83f9-ea1455d4e481" satisfied condition "success or failure" May 15 14:09:30.279: INFO: Trying to get logs from node iruya-worker pod pod-e5837869-4af6-4cf8-83f9-ea1455d4e481 container test-container: STEP: delete the pod May 15 14:09:30.435: INFO: Waiting for pod pod-e5837869-4af6-4cf8-83f9-ea1455d4e481 to disappear May 15 14:09:30.452: INFO: Pod pod-e5837869-4af6-4cf8-83f9-ea1455d4e481 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:09:30.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2319" for this suite. 
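The tmpfs EmptyDir specs in this block all follow one pattern: create a pod whose emptyDir volume sets medium: Memory, let a short-lived container print the mount's type and mode, wait for the pod to reach Succeeded (the "success or failure" condition in the log), and diff the container log against the expected output — that is what "Saw pod success" followed by "Trying to get logs from node iruya-worker" is doing. Below is a rough standalone equivalent; the pod name, image, and command are illustrative stand-ins, since the real spec uses the suite's mounttest image and pins the exact expected mode string:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Hypothetical pod: a Memory-medium emptyDir mounted at /test-volume,
// with busybox printing the filesystem type and the mount's octal mode.
const podYAML = `
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -f -c %T /test-volume && stat -c %a /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory
`

func main() {
	create := exec.Command("kubectl", "apply", "-f", "-")
	create.Stdin = strings.NewReader(podYAML)
	if out, err := create.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// Poll the phase the same way the suite reads fields: via a template.
	for {
		out, _ := exec.Command("kubectl", "get", "pod", "emptydir-mode-check",
			"-o", "template", "--template={{.status.phase}}").Output()
		if strings.TrimSpace(string(out)) == "Succeeded" {
			break
		}
		time.Sleep(2 * time.Second)
	}
	// Expect "tmpfs" for the type and a world-writable mode on the mount.
	log, _ := exec.Command("kubectl", "logs", "emptydir-mode-check").Output()
	fmt.Printf("mount type and mode:\n%s", log)
}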
May 15 14:09:36.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:09:36.661: INFO: namespace emptydir-2319 deletion completed in 6.206251s • [SLOW TEST:10.535 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:09:36.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:09:36.691: INFO: Creating deployment "nginx-deployment" May 15 14:09:36.729: INFO: Waiting for observed generation 1 May 15 14:09:38.800: INFO: Waiting for all required pods to come up May 15 14:09:38.806: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 15 14:09:48.815: INFO: Waiting for deployment "nginx-deployment" to complete May 15 14:09:48.821: INFO: Updating deployment "nginx-deployment" with a non-existent image May 15 14:09:48.826: INFO: Updating deployment nginx-deployment May 15 14:09:48.826: INFO: Waiting for observed generation 2 May 15 14:09:50.836: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 15 14:09:50.839: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 15 14:09:50.841: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 15 14:09:50.847: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 15 14:09:50.847: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 15 14:09:50.850: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 15 14:09:50.853: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 15 14:09:50.853: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 15 14:09:50.859: INFO: Updating deployment nginx-deployment May 15 14:09:50.859: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 15 14:09:51.082: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 15 14:09:51.695: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 15 14:09:54.514: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6444,SelfLink:/apis/apps/v1/namespaces/deployment-6444/deployments/nginx-deployment,UID:86891567-0144-46c4-ac21-d5819a1da157,ResourceVersion:11047188,Generation:3,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-15 14:09:51 +0000 UTC 2020-05-15 14:09:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-15 14:09:52 +0000 UTC 2020-05-15 14:09:36 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 15 14:09:54.610: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6444,SelfLink:/apis/apps/v1/namespaces/deployment-6444/replicasets/nginx-deployment-55fb7cb77f,UID:749e5a70-c9f0-4417-bb5f-dfec357d6d3d,ResourceVersion:11047181,Generation:3,CreationTimestamp:2020-05-15 14:09:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86891567-0144-46c4-ac21-d5819a1da157 0xc00327bfc7 0xc00327bfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 15 14:09:54.610: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 15 14:09:54.610: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6444,SelfLink:/apis/apps/v1/namespaces/deployment-6444/replicasets/nginx-deployment-7b8c6f4498,UID:bc3361f0-fa18-4cbd-8c1e-91c62e189661,ResourceVersion:11047175,Generation:3,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86891567-0144-46c4-ac21-d5819a1da157 0xc0026aa097 0xc0026aa098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 15 14:09:54.737: INFO: Pod "nginx-deployment-55fb7cb77f-2kf4g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2kf4g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-2kf4g,UID:73fb7337-4258-483f-9cc5-fe743c331a47,ResourceVersion:11047204,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aa9f7 0xc0026aa9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aaa70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.737: INFO: Pod "nginx-deployment-55fb7cb77f-4znq6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4znq6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-4znq6,UID:fe06012e-04ab-48b5-9eea-42000bc483c8,ResourceVersion:11047113,Generation:0,CreationTimestamp:2020-05-15 14:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aab67 0xc0026aab68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aabe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aac00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.737: INFO: Pod "nginx-deployment-55fb7cb77f-8xvp5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8xvp5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-8xvp5,UID:7186c6fb-321b-4747-bdbc-afcee03a60f5,ResourceVersion:11047172,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aacf7 0xc0026aacf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aad70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aad90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.737: INFO: Pod "nginx-deployment-55fb7cb77f-9v7cz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9v7cz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-9v7cz,UID:f5140d22-dd67-4416-a873-7fa76d6c5d66,ResourceVersion:11047224,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aae17 0xc0026aae18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026aae90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026aaeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.738: INFO: Pod "nginx-deployment-55fb7cb77f-clmqg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-clmqg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-clmqg,UID:fc5f0872-8898-46eb-bd51-0f12e6506e7f,ResourceVersion:11047170,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aaf87 0xc0026aaf88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab000} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.738: INFO: Pod "nginx-deployment-55fb7cb77f-jxh96" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jxh96,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-jxh96,UID:4364d02d-17e4-434f-97b8-ed955871e65b,ResourceVersion:11047104,Generation:0,CreationTimestamp:2020-05-15 14:09:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab0a7 0xc0026ab0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.738: INFO: Pod "nginx-deployment-55fb7cb77f-kmljn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kmljn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-kmljn,UID:e7334e1a-79cf-441d-8354-6cde7964434d,ResourceVersion:11047195,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab217 0xc0026ab218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.738: INFO: Pod "nginx-deployment-55fb7cb77f-nrh7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nrh7m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-nrh7m,UID:f66d1b57-b80c-4664-afee-0d9a01ca99ae,ResourceVersion:11047169,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab387 0xc0026ab388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-55fb7cb77f-svt7r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-svt7r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-svt7r,UID:276ed446-aa80-43f1-ba2a-2cce489b561a,ResourceVersion:11047116,Generation:0,CreationTimestamp:2020-05-15 14:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab4a7 0xc0026ab4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-55fb7cb77f-sxn4j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxn4j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-sxn4j,UID:1499e27d-dff6-4545-9d34-57a45d848920,ResourceVersion:11047091,Generation:0,CreationTimestamp:2020-05-15 14:09:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab617 0xc0026ab618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-55fb7cb77f-wfsxz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wfsxz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-wfsxz,UID:a72c30d6-a90f-44d0-a60a-fbb351e1b474,ResourceVersion:11047213,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab787 0xc0026ab788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-55fb7cb77f-wjf9w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wjf9w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-wjf9w,UID:ce24d1d5-7aa2-4ebd-acad-d2f5d31fae5a,ResourceVersion:11047185,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026ab8f7 0xc0026ab8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ab970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ab990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-55fb7cb77f-zz4z7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zz4z7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-55fb7cb77f-zz4z7,UID:35b7d622-19c6-4eb8-b3f5-a50e0edf8417,ResourceVersion:11047096,Generation:0,CreationTimestamp:2020-05-15 14:09:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 749e5a70-c9f0-4417-bb5f-dfec357d6d3d 0xc0026aba67 0xc0026aba68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.739: INFO: Pod "nginx-deployment-7b8c6f4498-4mw8s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4mw8s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-4mw8s,UID:1bc7419a-0c56-4c84-a209-0ab0fa841eb9,ResourceVersion:11047234,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0026abbd7 0xc0026abbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-749mh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-749mh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-749mh,UID:eca6f556-2301-4a03-8bd0-45f586a7b6a9,ResourceVersion:11047197,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0026abd37 0xc0026abd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-7qs9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7qs9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-7qs9p,UID:3f07f8b0-29a9-43eb-a656-9ae22fd1a20b,ResourceVersion:11047206,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0026abe97 0xc0026abe98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026abf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026abf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-86d76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-86d76,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-86d76,UID:e338f6f5-a67f-43d7-8d1b-445dfdceb462,ResourceVersion:11047171,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0026abff7 0xc0026abff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-9q6ns" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9q6ns,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-9q6ns,UID:1bedd588-f474-483a-9a30-2a871cd8e14d,ResourceVersion:11047015,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4117 0xc0028a4118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4190} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0028a41b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.126,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://735a6ce638dff4068e8421eb53f6301fcf89fa0f62ea8556b5ce8a4921939046}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-btjdz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-btjdz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-btjdz,UID:77f23613-c248-4df2-99ed-f8fd666e147e,ResourceVersion:11047200,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4297 0xc0028a4298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4310} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.740: INFO: Pod "nginx-deployment-7b8c6f4498-fsgvq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fsgvq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-fsgvq,UID:e439606e-f512-41e6-b13f-c214936eb272,ResourceVersion:11047215,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a43f7 0xc0028a43f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4470} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.741: INFO: Pod "nginx-deployment-7b8c6f4498-gg72w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gg72w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-gg72w,UID:df8c04d0-63d7-48e4-b333-dee2b07fff6f,ResourceVersion:11047235,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4557 0xc0028a4558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a45d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.741: INFO: Pod "nginx-deployment-7b8c6f4498-hr9sh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hr9sh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-hr9sh,UID:1512680f-631c-449e-aa2c-4d475c6b62b3,ResourceVersion:11047029,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a46c7 0xc0028a46c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.39,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://49b595180155ec3bd74313f2a1b157b37bae193b65af1a63ca13d3cda95fba28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.741: INFO: Pod "nginx-deployment-7b8c6f4498-jk7pv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jk7pv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-jk7pv,UID:53ea49c7-b635-41c2-a14f-98febf3314b3,ResourceVersion:11047167,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4857 0xc0028a4858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a48d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a48f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.741: INFO: Pod "nginx-deployment-7b8c6f4498-kdn8m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kdn8m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-kdn8m,UID:325ff126-7773-441f-9b6b-3fe4750cb8e9,ResourceVersion:11047042,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4977 0xc0028a4978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.41,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9ab575ae4c50774dd043138510d9eabe591cc6cae23e15e4310c19d0bb38b7ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.741: INFO: Pod "nginx-deployment-7b8c6f4498-ksglj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ksglj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-ksglj,UID:1fe262e3-0dc1-4f7e-83a2-90fc5630a2ac,ResourceVersion:11047174,Generation:0,CreationTimestamp:2020-05-15 14:09:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4af7 0xc0028a4af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-nfn4q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nfn4q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-nfn4q,UID:cb4e4b4f-dd3e-4011-942f-d4250d703297,ResourceVersion:11047191,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4c57 0xc0028a4c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-sb8jf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sb8jf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-sb8jf,UID:a6182649-284d-430a-a0e4-a10a886292d3,ResourceVersion:11047001,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4db7 0xc0028a4db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.125,StartTime:2020-05-15 14:09:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ccb9ac415fa2136e3a42c81c46edabe82e3760faca3d8660a0042c11922dff54}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-shhnk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-shhnk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-shhnk,UID:e4ab4db2-d549-4d2f-b52a-e5c54cf1d6bd,ResourceVersion:11047052,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a4f27 0xc0028a4f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a4fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a4fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.128,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0dbfc7ae99cf2e1dc5e2cc5f66139d4485ea090b91fdcddcfa486d395421ee69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-sn2n5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sn2n5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-sn2n5,UID:4e9be6d0-57a0-4716-94a1-f65dc971b9e0,ResourceVersion:11047048,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a5097 0xc0028a5098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a5110} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a5140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.129,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c925231068a1e973c56a91441790c6de77731d54bc9b373497a6b562b9deac28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-spfvc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-spfvc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-spfvc,UID:40e2278c-8f67-4645-bfbe-e036916860ea,ResourceVersion:11047023,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a5217 0xc0028a5218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a5290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a52b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.127,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://60ac4f15baf1f25d92db5a8ec6d3e6dafc2cd2c2066e236ac8e8521b670f3ad1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-t285w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t285w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-t285w,UID:e13c9d5a-a757-4960-ac9e-d060af340e24,ResourceVersion:11047229,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a5387 0xc0028a5388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a5400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a5420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.742: INFO: Pod "nginx-deployment-7b8c6f4498-wj9qf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wj9qf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-wj9qf,UID:98de8763-5222-4ff5-a829-d821cb9c5d6e,ResourceVersion:11047221,Generation:0,CreationTimestamp:2020-05-15 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a54e7 0xc0028a54e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a5560} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a5590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-15 14:09:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 15 14:09:54.743: INFO: Pod "nginx-deployment-7b8c6f4498-zr7qf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zr7qf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6444,SelfLink:/api/v1/namespaces/deployment-6444/pods/nginx-deployment-7b8c6f4498-zr7qf,UID:a830167c-be2c-4c49-992b-065a6aee9677,ResourceVersion:11047033,Generation:0,CreationTimestamp:2020-05-15 14:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc3361f0-fa18-4cbd-8c1e-91c62e189661 0xc0028a5657 0xc0028a5658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qfpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qfpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qfpv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028a56d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028a56f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.40,StartTime:2020-05-15 14:09:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-15 14:09:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1e03a28017826f8a4dc9701e51dbed8ad9ade1541e6bce001a4a11c13e8af4cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:09:54.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6444" for this suite. 
May 15 14:10:13.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:10:13.216: INFO: namespace deployment-6444 deletion completed in 18.343005429s • [SLOW TEST:36.554 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:10:13.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 15 14:10:13.527: INFO: Waiting up to 5m0s for pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489" in namespace "emptydir-9745" to be "success or failure" May 15 14:10:13.759: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Pending", Reason="", readiness=false. Elapsed: 232.083744ms May 15 14:10:15.764: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236841871s May 15 14:10:17.879: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352150634s May 15 14:10:19.883: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Running", Reason="", readiness=true. Elapsed: 6.355689176s May 15 14:10:21.887: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Running", Reason="", readiness=true. Elapsed: 8.359646422s May 15 14:10:23.890: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Running", Reason="", readiness=true. Elapsed: 10.363402373s May 15 14:10:25.895: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.367677814s STEP: Saw pod success May 15 14:10:25.895: INFO: Pod "pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489" satisfied condition "success or failure" May 15 14:10:25.898: INFO: Trying to get logs from node iruya-worker pod pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489 container test-container: STEP: delete the pod May 15 14:10:25.975: INFO: Waiting for pod pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489 to disappear May 15 14:10:26.065: INFO: Pod pod-d1b9f3ae-e46a-4cc8-b582-3a886eebe489 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:10:26.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9745" for this suite. 
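For readers reconstructing what this spec exercises: the pod it creates can be sketched as a minimal manifest along the following lines. The name, image, and command are illustrative assumptions, not values taken from the test; the point is that an emptyDir on the default medium is mounted with mode 0777, which the test container checks before writing into it.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # stand-in for the test image
    command: ["sh", "-c", "stat -c '%a' /mnt/ed && echo content > /mnt/ed/file"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}                    # default medium, backed by node storage

A pod like this reaches Succeeded once its command exits 0, mirroring the "success or failure" condition polled above.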
May 15 14:10:32.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:10:32.156: INFO: namespace emptydir-9745 deletion completed in 6.087176876s • [SLOW TEST:18.939 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:10:32.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 14:10:32.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c" in namespace "projected-9492" to be "success or failure" May 15 14:10:32.236: INFO: Pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987959ms May 15 14:10:34.298: INFO: Pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066345643s May 15 14:10:36.303: INFO: Pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c": Phase="Running", Reason="", readiness=true. Elapsed: 4.071508104s May 15 14:10:38.308: INFO: Pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075876917s STEP: Saw pod success May 15 14:10:38.308: INFO: Pod "downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c" satisfied condition "success or failure" May 15 14:10:38.311: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c container client-container: STEP: delete the pod May 15 14:10:38.353: INFO: Waiting for pod downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c to disappear May 15 14:10:38.358: INFO: Pod downwardapi-volume-e8bf5407-93a2-44f2-a7cf-ca76a053517c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:10:38.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9492" for this suite. 
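A sketch of the volume shape behind this spec, with illustrative names: a projected downwardAPI volume exposes limits.memory through a resourceFieldRef, and because the container declares no memory limit, the file reports the node's allocatable memory instead, which is the behavior asserted above.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # stand-in
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # no limit is set, so node allocatable memory is reported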
May 15 14:10:44.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:10:44.443: INFO: namespace projected-9492 deletion completed in 6.082539208s • [SLOW TEST:12.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:10:44.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 15 14:10:44.522: INFO: Waiting up to 5m0s for pod "downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c" in namespace "downward-api-837" to be "success or failure" May 15 14:10:44.526: INFO: Pod "downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172988ms May 15 14:10:46.530: INFO: Pod "downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789565s May 15 14:10:48.534: INFO: Pod "downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011443076s STEP: Saw pod success May 15 14:10:48.534: INFO: Pod "downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c" satisfied condition "success or failure" May 15 14:10:48.536: INFO: Trying to get logs from node iruya-worker pod downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c container dapi-container: STEP: delete the pod May 15 14:10:48.559: INFO: Waiting for pod downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c to disappear May 15 14:10:48.605: INFO: Pod downward-api-1eb9e2e4-4c60-43b3-8592-bde2b8c8856c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:10:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-837" for this suite. 
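The same defaulting rule applies to environment variables, which is what this spec covers. A minimal sketch, with hypothetical names and image: env vars populated from resourceFieldRef fall back to node allocatable when the container sets no limits.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # stand-in
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # no limit declared, so node allocatable CPU is injected
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # likewise for memory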
May 15 14:10:54.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:10:54.714: INFO: namespace downward-api-837 deletion completed in 6.10429229s • [SLOW TEST:10.269 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:10:54.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 14:10:54.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad" in namespace "projected-4942" to be "success or failure" May 15 14:10:54.928: INFO: Pod "downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad": Phase="Pending", Reason="", readiness=false. Elapsed: 49.924329ms May 15 14:10:56.932: INFO: Pod "downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053692874s May 15 14:10:58.936: INFO: Pod "downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057740729s STEP: Saw pod success May 15 14:10:58.936: INFO: Pod "downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad" satisfied condition "success or failure" May 15 14:10:58.939: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad container client-container: STEP: delete the pod May 15 14:10:58.964: INFO: Waiting for pod downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad to disappear May 15 14:10:58.982: INFO: Pod downwardapi-volume-2f169298-890d-4a39-96c5-88d5e7d1efad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:10:58.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4942" for this suite. 
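For the "podname only" variant just concluded, the essential shape is a projected downwardAPI item backed by a fieldRef rather than a resourceFieldRef. Names and image below are illustrative, not the test's own:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # stand-in
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the file contains the pod's own name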
May 15 14:11:05.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:11:05.100: INFO: namespace projected-4942 deletion completed in 6.081675817s • [SLOW TEST:10.387 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:11:05.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:11:05.175: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:11:09.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4097" for this suite. 
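This spec is about the transport rather than the pod definition: it streams container output over a WebSocket instead of a plain HTTP response. A sketch of the kind of pod involved, with hypothetical name and image; the endpoint noted in the comment is the standard log subresource.

apiVersion: v1
kind: Pod
metadata:
  name: logs-websocket-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # stand-in
    command: ["sh", "-c", "echo container output; sleep 600"]
# The spec then reads this output through the API server's log subresource,
# GET /api/v1/namespaces/<ns>/pods/logs-websocket-demo/log, negotiated as a
# WebSocket connection rather than an ordinary HTTP body.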
May 15 14:11:53.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:11:53.373: INFO: namespace pods-4097 deletion completed in 44.12904444s • [SLOW TEST:48.273 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:11:53.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1e6ef0f3-11cd-4708-8b46-390c54d6cb8c STEP: Creating a pod to test consume configMaps May 15 14:11:53.451: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2" in namespace "projected-9473" to be "success or failure" May 15 14:11:53.472: INFO: Pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.521718ms May 15 14:11:55.475: INFO: Pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023928661s May 15 14:11:57.479: INFO: Pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.027733839s May 15 14:11:59.483: INFO: Pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031593045s STEP: Saw pod success May 15 14:11:59.483: INFO: Pod "pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2" satisfied condition "success or failure" May 15 14:11:59.488: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2 container projected-configmap-volume-test: STEP: delete the pod May 15 14:11:59.502: INFO: Waiting for pod pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2 to disappear May 15 14:11:59.507: INFO: Pod pod-projected-configmaps-0844d5f7-7e7e-4b31-8c79-2e30080035b2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:11:59.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9473" for this suite. 
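The "multiple volumes in the same pod" wording above means one ConfigMap consumed through two separate projected volumes. A minimal sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo           # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # stand-in
    command: ["sh", "-c", "cat /etc/cm-vol-1/data-1 /etc/cm-vol-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/cm-vol-1
    - name: cm-vol-2
      mountPath: /etc/cm-vol-2
  volumes:                          # the same ConfigMap, mounted twice
  - name: cm-vol-1
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
  - name: cm-vol-2
    projected:
      sources:
      - configMap:
          name: projected-cm-demo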
May 15 14:12:05.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:12:05.594: INFO: namespace projected-9473 deletion completed in 6.084219871s • [SLOW TEST:12.221 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:12:05.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:12:05.828: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 16.131712ms) May 15 14:12:05.831: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.013644ms) May 15 14:12:05.833: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.780567ms) May 15 14:12:05.836: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.242333ms) May 15 14:12:05.838: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.69457ms) May 15 14:12:05.842: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.329692ms) May 15 14:12:05.845: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.990399ms) May 15 14:12:05.847: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.449224ms) May 15 14:12:05.870: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 22.235156ms) May 15 14:12:05.873: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.745424ms) May 15 14:12:05.877: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.876961ms) May 15 14:12:05.881: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.68117ms) May 15 14:12:05.884: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.100366ms) May 15 14:12:05.887: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.961301ms) May 15 14:12:05.890: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.019129ms) May 15 14:12:05.893: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.106256ms) May 15 14:12:05.896: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.813086ms) May 15 14:12:05.899: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.726047ms) May 15 14:12:05.902: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.977378ms) May 15 14:12:05.905: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.540472ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:12:05.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1844" for this suite. May 15 14:12:11.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:12:11.995: INFO: namespace proxy-1844 deletion completed in 6.087236722s • [SLOW TEST:6.400 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:12:11.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 14:12:12.081: INFO: Waiting up to 5m0s for pod "pod-b807daa5-6e6a-4783-b56c-c50fff00aff5" in namespace "emptydir-7362" to be "success or failure" May 15 14:12:12.095: INFO: Pod "pod-b807daa5-6e6a-4783-b56c-c50fff00aff5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.853356ms May 15 14:12:14.228: INFO: Pod "pod-b807daa5-6e6a-4783-b56c-c50fff00aff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147280574s May 15 14:12:16.234: INFO: Pod "pod-b807daa5-6e6a-4783-b56c-c50fff00aff5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153316755s STEP: Saw pod success May 15 14:12:16.234: INFO: Pod "pod-b807daa5-6e6a-4783-b56c-c50fff00aff5" satisfied condition "success or failure" May 15 14:12:16.236: INFO: Trying to get logs from node iruya-worker pod pod-b807daa5-6e6a-4783-b56c-c50fff00aff5 container test-container: STEP: delete the pod May 15 14:12:16.391: INFO: Waiting for pod pod-b807daa5-6e6a-4783-b56c-c50fff00aff5 to disappear May 15 14:12:16.414: INFO: Pod pod-b807daa5-6e6a-4783-b56c-c50fff00aff5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:12:16.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7362" for this suite. 
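The (non-root,0644,tmpfs) variant just finished differs from the earlier emptyDir case in two ways: the pod runs as a non-root user and the volume is memory-backed. A sketch under those assumptions, with hypothetical name, uid, and image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root, the "non-root" in the test name
  containers:
  - name: test-container
    image: busybox                  # stand-in
    command: ["sh", "-c", "echo content > /mnt/ed/file && chmod 0644 /mnt/ed/file && stat -c '%a' /mnt/ed/file"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory                # tmpfs-backed, the "tmpfs" in the test name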
May 15 14:12:22.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:12:22.548: INFO: namespace emptydir-7362 deletion completed in 6.130697862s • [SLOW TEST:10.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:12:22.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 15 14:12:29.466: INFO: 10 pods remaining May 15 14:12:29.466: INFO: 10 pods has nil DeletionTimestamp May 15 14:12:29.466: INFO: May 15 14:12:29.938: INFO: 0 pods remaining May 15 14:12:29.938: INFO: 0 pods has nil DeletionTimestamp May 15 14:12:29.938: INFO: May 15 14:12:31.528: INFO: 0 pods remaining May 15 14:12:31.528: INFO: 0 pods has nil DeletionTimestamp May 15 14:12:31.528: INFO: STEP: Gathering metrics W0515 14:12:32.588209 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 14:12:32.588: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:12:32.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9711" for this suite. 
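The deleteOptions behavior this garbage-collector spec checks corresponds to foreground cascading deletion. A sketch of the kind of controller involved; the name is hypothetical and the image is simply one seen elsewhere in this run, and the comment states the API semantics being asserted rather than the test's exact mechanics.

apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc                  # hypothetical
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine    # image used elsewhere in this run
# Deleting the RC with DeleteOptions{propagationPolicy: Foreground} keeps the RC
# object alive, pinned by the foregroundDeletion finalizer, until the garbage
# collector has removed all of its pods: the "keep the rc around" behavior above.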
May 15 14:12:39.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:12:39.206: INFO: namespace gc-9711 deletion completed in 6.257624307s • [SLOW TEST:16.657 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:12:39.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-7c761e62-8ef9-4d04-aa0f-fb3614ab5180 STEP: Creating configMap with name cm-test-opt-upd-eb2bab50-4f7d-4bd8-849a-c87b0c7ccf93 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7c761e62-8ef9-4d04-aa0f-fb3614ab5180 STEP: Updating configmap cm-test-opt-upd-eb2bab50-4f7d-4bd8-849a-c87b0c7ccf93 STEP: Creating configMap with name cm-test-opt-create-7fe9db6c-cbeb-4d56-974a-1dd814d78262 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:12:47.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1989" for this suite. 
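The optional-ConfigMap behavior exercised above can be sketched as follows; the map and pod names are illustrative stand-ins for the generated cm-test-opt-* names in the log. Marking a configMap volume optional lets the pod start (and keep running) whether or not the map exists, and kubelet updates the mounted contents as maps are deleted, updated, or created.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-opt-del-demo             # hypothetical; stands for the map deleted mid-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo            # hypothetical
spec:
  containers:
  - name: main
    image: busybox                  # stand-in
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del-vol
      mountPath: /etc/cm-del
    - name: create-vol
      mountPath: /etc/cm-create
  volumes:
  - name: del-vol
    configMap:
      name: cm-opt-del-demo
      optional: true                # deleting the map later does not break the pod
  - name: create-vol
    configMap:
      name: cm-opt-create-demo      # intentionally absent at pod creation
      optional: true                # the pod still starts; the mount fills in once the map appears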
May 15 14:13:09.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:13:09.523: INFO: namespace configmap-1989 deletion completed in 22.086990177s • [SLOW TEST:30.317 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:13:09.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-pzd9 STEP: Creating a pod to test atomic-volume-subpath May 15 14:13:09.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pzd9" in namespace "subpath-820" to be "success or failure" May 15 14:13:09.677: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45694ms May 15 14:13:11.780: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104787611s May 15 14:13:13.784: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 4.108621315s May 15 14:13:15.788: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 6.113284923s May 15 14:13:17.792: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 8.116774205s May 15 14:13:19.796: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 10.121328718s May 15 14:13:21.802: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 12.126953657s May 15 14:13:23.805: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 14.130235249s May 15 14:13:25.809: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 16.134368442s May 15 14:13:27.814: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 18.138811237s May 15 14:13:29.818: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 20.142731379s May 15 14:13:31.822: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Running", Reason="", readiness=true. Elapsed: 22.147057046s May 15 14:13:33.828: INFO: Pod "pod-subpath-test-projected-pzd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.153362638s
STEP: Saw pod success May 15 14:13:33.828: INFO: Pod "pod-subpath-test-projected-pzd9" satisfied condition "success or failure" May 15 14:13:33.833: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-pzd9 container test-container-subpath-projected-pzd9: STEP: delete the pod May 15 14:13:33.885: INFO: Waiting for pod pod-subpath-test-projected-pzd9 to disappear May 15 14:13:33.914: INFO: Pod pod-subpath-test-projected-pzd9 no longer exists STEP: Deleting pod pod-subpath-test-projected-pzd9 May 15 14:13:33.914: INFO: Deleting pod "pod-subpath-test-projected-pzd9" in namespace "subpath-820" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:13:33.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-820" for this suite. May 15 14:13:39.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:13:40.062: INFO: namespace subpath-820 deletion completed in 6.107084752s • [SLOW TEST:30.538 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:13:40.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:13:40.146: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 15 14:13:40.152: INFO: Number of nodes with available pods: 0 May 15 14:13:40.152: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched.
May 15 14:13:40.212: INFO: Number of nodes with available pods: 0 May 15 14:13:40.212: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:41.216: INFO: Number of nodes with available pods: 0 May 15 14:13:41.216: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:42.254: INFO: Number of nodes with available pods: 0 May 15 14:13:42.254: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:43.216: INFO: Number of nodes with available pods: 0 May 15 14:13:43.216: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:44.217: INFO: Number of nodes with available pods: 1 May 15 14:13:44.217: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 15 14:13:44.248: INFO: Number of nodes with available pods: 1 May 15 14:13:44.248: INFO: Number of running nodes: 0, number of available pods: 1 May 15 14:13:45.254: INFO: Number of nodes with available pods: 0 May 15 14:13:45.254: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 15 14:13:45.265: INFO: Number of nodes with available pods: 0 May 15 14:13:45.265: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:46.332: INFO: Number of nodes with available pods: 0 May 15 14:13:46.332: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:47.269: INFO: Number of nodes with available pods: 0 May 15 14:13:47.269: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:48.270: INFO: Number of nodes with available pods: 0 May 15 14:13:48.270: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:49.268: INFO: Number of nodes with available pods: 0 May 15 14:13:49.268: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:50.332: INFO: Number of nodes with available pods: 0 May 15 14:13:50.332: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:51.269: INFO: Number of nodes with available pods: 0 May 15 14:13:51.269: INFO: Node iruya-worker is running more than one daemon pod May 15 14:13:52.270: INFO: Number of nodes with available pods: 1 May 15 14:13:52.270: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5428, will wait for the garbage collector to delete the pods May 15 14:13:52.335: INFO: Deleting DaemonSet.extensions daemon-set took: 7.141452ms May 15 14:13:52.636: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27719ms May 15 14:14:02.239: INFO: Number of nodes with available pods: 0 May 15 14:14:02.239: INFO: Number of running nodes: 0, number of available pods: 0 May 15 14:14:02.241: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5428/daemonsets","resourceVersion":"11048421"},"items":null} May 15 14:14:02.244: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5428/pods","resourceVersion":"11048421"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:14:02.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5428" for this suite. May 15 14:14:08.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:14:08.448: INFO: namespace daemonsets-5428 deletion completed in 6.143148103s • [SLOW TEST:28.386 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:14:08.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-89496741-7c45-4942-bb39-8e44dd2b4331 STEP: Creating a pod to test consume secrets May 15 14:14:08.574: INFO: Waiting up to 5m0s for pod "pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c" in namespace "secrets-3738" to be "success or failure" May 15 14:14:08.578: INFO: Pod "pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473093ms May 15 14:14:10.650: INFO: Pod "pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075915563s May 15 14:14:12.991: INFO: Pod "pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.417517173s STEP: Saw pod success May 15 14:14:12.991: INFO: Pod "pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c" satisfied condition "success or failure" May 15 14:14:12.994: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c container secret-volume-test: STEP: delete the pod May 15 14:14:13.028: INFO: Waiting for pod pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c to disappear May 15 14:14:13.397: INFO: Pod pod-secrets-7c1fb7f9-9b70-4e09-99b3-2e2fa2014c5c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:14:13.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3738" for this suite.
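The defaultMode behavior above can be sketched with a hypothetical Secret and pod; the names, value, and mode are illustrative, not the generated ones in the log. defaultMode sets the permission bits on every file the secret volume projects.

apiVersion: v1
kind: Secret
metadata:
  name: secret-mode-demo            # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # stand-in
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400             # octal in YAML; use the decimal 256 in JSON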
May 15 14:14:19.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:14:19.547: INFO: namespace secrets-3738 deletion completed in 6.14560465s • [SLOW TEST:11.098 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:14:19.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-1662/secret-test-6124ba2a-33a3-415b-b7fa-ae848fdb944f STEP: Creating a pod to test consume secrets May 15 14:14:19.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9" in namespace "secrets-1662" to be "success or failure" May 15 14:14:19.625: INFO: Pod "pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944447ms May 15 14:14:21.643: INFO: Pod "pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021354818s May 15 14:14:23.648: INFO: Pod "pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026339464s STEP: Saw pod success May 15 14:14:23.648: INFO: Pod "pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9" satisfied condition "success or failure" May 15 14:14:23.651: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9 container env-test: STEP: delete the pod May 15 14:14:23.669: INFO: Waiting for pod pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9 to disappear May 15 14:14:23.673: INFO: Pod pod-configmaps-6aa9b529-2b4d-4dfe-b519-cdf62a2f3bc9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:14:23.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1662" for this suite. 
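The environment-variable counterpart just exercised uses secretKeyRef instead of a volume. A minimal sketch with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: env-secret-demo             # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # stand-in
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret-demo
          key: data-1               # injected as an environment variable, not a file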
May 15 14:14:29.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:14:29.958: INFO: namespace secrets-1662 deletion completed in 6.281450205s • [SLOW TEST:10.410 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:14:29.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5734 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-5734 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5734 May 15 14:14:30.155: INFO: Found 0 stateful pods, waiting for 1 May 15 14:14:40.159: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 15 14:14:40.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:14:43.738: INFO: stderr: "I0515 14:14:43.605570 2428 log.go:172] (0xc000c00630) (0xc000bd2820) Create stream\nI0515 14:14:43.605605 2428 log.go:172] (0xc000c00630) (0xc000bd2820) Stream added, broadcasting: 1\nI0515 14:14:43.607983 2428 log.go:172] (0xc000c00630) Reply frame received for 1\nI0515 14:14:43.608015 2428 log.go:172] (0xc000c00630) (0xc00082c000) Create stream\nI0515 14:14:43.608025 2428 log.go:172] (0xc000c00630) (0xc00082c000) Stream added, broadcasting: 3\nI0515 14:14:43.609425 2428 log.go:172] (0xc000c00630) Reply frame received for 3\nI0515 14:14:43.609467 2428 log.go:172] (0xc000c00630) (0xc000bd28c0) Create stream\nI0515 14:14:43.609480 2428 log.go:172] (0xc000c00630) (0xc000bd28c0) Stream added, broadcasting: 5\nI0515 14:14:43.610293 2428 log.go:172] (0xc000c00630) Reply frame received for 5\nI0515 14:14:43.699705 2428 log.go:172] (0xc000c00630) Data frame received for 5\nI0515 14:14:43.699729 2428 log.go:172] (0xc000bd28c0) (5) Data frame handling\nI0515 14:14:43.699753 2428 log.go:172] (0xc000bd28c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:14:43.729599 2428 log.go:172] 
(0xc000c00630) Data frame received for 3\nI0515 14:14:43.729633 2428 log.go:172] (0xc00082c000) (3) Data frame handling\nI0515 14:14:43.729652 2428 log.go:172] (0xc00082c000) (3) Data frame sent\nI0515 14:14:43.729786 2428 log.go:172] (0xc000c00630) Data frame received for 5\nI0515 14:14:43.729804 2428 log.go:172] (0xc000bd28c0) (5) Data frame handling\nI0515 14:14:43.729840 2428 log.go:172] (0xc000c00630) Data frame received for 3\nI0515 14:14:43.729870 2428 log.go:172] (0xc00082c000) (3) Data frame handling\nI0515 14:14:43.731778 2428 log.go:172] (0xc000c00630) Data frame received for 1\nI0515 14:14:43.731804 2428 log.go:172] (0xc000bd2820) (1) Data frame handling\nI0515 14:14:43.731829 2428 log.go:172] (0xc000bd2820) (1) Data frame sent\nI0515 14:14:43.731854 2428 log.go:172] (0xc000c00630) (0xc000bd2820) Stream removed, broadcasting: 1\nI0515 14:14:43.731884 2428 log.go:172] (0xc000c00630) Go away received\nI0515 14:14:43.732309 2428 log.go:172] (0xc000c00630) (0xc000bd2820) Stream removed, broadcasting: 1\nI0515 14:14:43.732328 2428 log.go:172] (0xc000c00630) (0xc00082c000) Stream removed, broadcasting: 3\nI0515 14:14:43.732336 2428 log.go:172] (0xc000c00630) (0xc000bd28c0) Stream removed, broadcasting: 5\n" May 15 14:14:43.738: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 14:14:43.738: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 14:14:43.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 14:14:53.747: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 14:14:53.747: INFO: Waiting for statefulset status.replicas updated to 0 May 15 14:14:53.765: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:14:53.765: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:14:53.765: INFO: May 15 14:14:53.765: INFO: StatefulSet ss has not reached scale 3, at 1 May 15 14:14:54.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99183998s May 15 14:14:55.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986371156s May 15 14:14:56.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.795056896s May 15 14:14:57.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.789840114s May 15 14:14:58.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.785053451s May 15 14:14:59.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.778983853s May 15 14:15:00.988: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.77390511s May 15 14:15:01.994: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768767445s May 15 14:15:03.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 763.292098ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5734 May 15 14:15:04.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5734 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 14:15:04.558: INFO: stderr: "I0515 14:15:04.466315 2461 log.go:172] (0xc0008502c0) (0xc0008f45a0) Create stream\nI0515 14:15:04.466352 2461 log.go:172] (0xc0008502c0) (0xc0008f45a0) Stream added, broadcasting: 1\nI0515 14:15:04.467784 2461 log.go:172] (0xc0008502c0) Reply frame received for 1\nI0515 14:15:04.467808 2461 log.go:172] (0xc0008502c0) (0xc000308320) Create stream\nI0515 14:15:04.467816 2461 log.go:172] (0xc0008502c0) (0xc000308320) Stream added, broadcasting: 3\nI0515 14:15:04.468417 2461 log.go:172] (0xc0008502c0) Reply frame received for 3\nI0515 14:15:04.468444 2461 log.go:172] (0xc0008502c0) (0xc000954000) Create stream\nI0515 14:15:04.468457 2461 log.go:172] (0xc0008502c0) (0xc000954000) Stream added, broadcasting: 5\nI0515 14:15:04.469019 2461 log.go:172] (0xc0008502c0) Reply frame received for 5\nI0515 14:15:04.552116 2461 log.go:172] (0xc0008502c0) Data frame received for 5\nI0515 14:15:04.552153 2461 log.go:172] (0xc000954000) (5) Data frame handling\nI0515 14:15:04.552164 2461 log.go:172] (0xc000954000) (5) Data frame sent\nI0515 14:15:04.552173 2461 log.go:172] (0xc0008502c0) Data frame received for 5\nI0515 14:15:04.552181 2461 log.go:172] (0xc000954000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 14:15:04.552205 2461 log.go:172] (0xc0008502c0) Data frame received for 3\nI0515 14:15:04.552215 2461 log.go:172] (0xc000308320) (3) Data frame handling\nI0515 14:15:04.552228 2461 log.go:172] (0xc000308320) (3) Data frame sent\nI0515 14:15:04.552240 2461 log.go:172] (0xc0008502c0) Data frame received for 3\nI0515 14:15:04.552248 2461 log.go:172] (0xc000308320) (3) Data frame handling\nI0515 14:15:04.553638 2461 log.go:172] (0xc0008502c0) Data frame received for 1\nI0515 14:15:04.553664 2461 log.go:172] (0xc0008f45a0) (1) Data frame handling\nI0515 14:15:04.553682 2461 log.go:172] (0xc0008f45a0) (1) Data frame sent\nI0515 14:15:04.553698 2461 log.go:172] (0xc0008502c0) (0xc0008f45a0) Stream removed, broadcasting: 1\nI0515 14:15:04.553714 2461 log.go:172] (0xc0008502c0) Go away received\nI0515 14:15:04.554018 2461 log.go:172] (0xc0008502c0) (0xc0008f45a0) Stream removed, broadcasting: 1\nI0515 14:15:04.554041 2461 log.go:172] (0xc0008502c0) (0xc000308320) Stream removed, broadcasting: 3\nI0515 14:15:04.554054 2461 log.go:172] (0xc0008502c0) (0xc000954000) Stream removed, broadcasting: 5\n" May 15 14:15:04.558: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 14:15:04.558: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 14:15:04.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 14:15:04.750: INFO: stderr: "I0515 14:15:04.679996 2481 log.go:172] (0xc000650420) (0xc00021e820) Create stream\nI0515 14:15:04.680038 2481 log.go:172] (0xc000650420) (0xc00021e820) Stream added, broadcasting: 1\nI0515 14:15:04.681527 2481 log.go:172] (0xc000650420) Reply frame received for 1\nI0515 14:15:04.681546 2481 log.go:172] (0xc000650420) (0xc0006ee000) Create stream\nI0515 14:15:04.681554 2481 log.go:172] (0xc000650420) (0xc0006ee000) Stream added, broadcasting: 3\nI0515 14:15:04.682060 2481 log.go:172] (0xc000650420) Reply frame received for 3\nI0515 14:15:04.682081 2481 log.go:172] 
(0xc000650420) (0xc00021e8c0) Create stream\nI0515 14:15:04.682087 2481 log.go:172] (0xc000650420) (0xc00021e8c0) Stream added, broadcasting: 5\nI0515 14:15:04.682636 2481 log.go:172] (0xc000650420) Reply frame received for 5\nI0515 14:15:04.742848 2481 log.go:172] (0xc000650420) Data frame received for 3\nI0515 14:15:04.742885 2481 log.go:172] (0xc0006ee000) (3) Data frame handling\nI0515 14:15:04.742901 2481 log.go:172] (0xc0006ee000) (3) Data frame sent\nI0515 14:15:04.746165 2481 log.go:172] (0xc000650420) Data frame received for 5\nI0515 14:15:04.746207 2481 log.go:172] (0xc00021e8c0) (5) Data frame handling\nI0515 14:15:04.746229 2481 log.go:172] (0xc00021e8c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0515 14:15:04.746246 2481 log.go:172] (0xc000650420) Data frame received for 3\nI0515 14:15:04.746425 2481 log.go:172] (0xc0006ee000) (3) Data frame handling\nI0515 14:15:04.746454 2481 log.go:172] (0xc000650420) Data frame received for 1\nI0515 14:15:04.746472 2481 log.go:172] (0xc00021e820) (1) Data frame handling\nI0515 14:15:04.746491 2481 log.go:172] (0xc00021e820) (1) Data frame sent\nI0515 14:15:04.746508 2481 log.go:172] (0xc000650420) (0xc00021e820) Stream removed, broadcasting: 1\nI0515 14:15:04.746584 2481 log.go:172] (0xc000650420) Data frame received for 5\nI0515 14:15:04.746600 2481 log.go:172] (0xc00021e8c0) (5) Data frame handling\nI0515 14:15:04.746625 2481 log.go:172] (0xc000650420) Go away received\nI0515 14:15:04.747232 2481 log.go:172] (0xc000650420) (0xc00021e820) Stream removed, broadcasting: 1\nI0515 14:15:04.747256 2481 log.go:172] (0xc000650420) (0xc0006ee000) Stream removed, broadcasting: 3\nI0515 14:15:04.747268 2481 log.go:172] (0xc000650420) (0xc00021e8c0) Stream removed, broadcasting: 5\n" May 15 14:15:04.750: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 14:15:04.750: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 14:15:04.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 14:15:04.931: INFO: stderr: "I0515 14:15:04.864387 2501 log.go:172] (0xc0008d6160) (0xc000556640) Create stream\nI0515 14:15:04.864433 2501 log.go:172] (0xc0008d6160) (0xc000556640) Stream added, broadcasting: 1\nI0515 14:15:04.866242 2501 log.go:172] (0xc0008d6160) Reply frame received for 1\nI0515 14:15:04.866262 2501 log.go:172] (0xc0008d6160) (0xc0005566e0) Create stream\nI0515 14:15:04.866272 2501 log.go:172] (0xc0008d6160) (0xc0005566e0) Stream added, broadcasting: 3\nI0515 14:15:04.866776 2501 log.go:172] (0xc0008d6160) Reply frame received for 3\nI0515 14:15:04.866790 2501 log.go:172] (0xc0008d6160) (0xc000556780) Create stream\nI0515 14:15:04.866805 2501 log.go:172] (0xc0008d6160) (0xc000556780) Stream added, broadcasting: 5\nI0515 14:15:04.867275 2501 log.go:172] (0xc0008d6160) Reply frame received for 5\nI0515 14:15:04.925340 2501 log.go:172] (0xc0008d6160) Data frame received for 3\nI0515 14:15:04.925374 2501 log.go:172] (0xc0005566e0) (3) Data frame handling\nI0515 14:15:04.925392 2501 log.go:172] (0xc0005566e0) (3) Data frame sent\nI0515 14:15:04.925414 2501 log.go:172] (0xc0008d6160) Data frame received for 5\nI0515 14:15:04.925422 2501 log.go:172] (0xc000556780) (5) Data frame handling\nI0515 14:15:04.925431 
2501 log.go:172] (0xc000556780) (5) Data frame sent\nI0515 14:15:04.925439 2501 log.go:172] (0xc0008d6160) Data frame received for 5\nI0515 14:15:04.925446 2501 log.go:172] (0xc000556780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0515 14:15:04.925462 2501 log.go:172] (0xc000556780) (5) Data frame sent\nI0515 14:15:04.925471 2501 log.go:172] (0xc0008d6160) Data frame received for 5\nI0515 14:15:04.925478 2501 log.go:172] (0xc000556780) (5) Data frame handling\nI0515 14:15:04.925490 2501 log.go:172] (0xc000556780) (5) Data frame sent\n+ true\nI0515 14:15:04.925693 2501 log.go:172] (0xc0008d6160) Data frame received for 3\nI0515 14:15:04.925711 2501 log.go:172] (0xc0005566e0) (3) Data frame handling\nI0515 14:15:04.925912 2501 log.go:172] (0xc0008d6160) Data frame received for 5\nI0515 14:15:04.925928 2501 log.go:172] (0xc000556780) (5) Data frame handling\nI0515 14:15:04.927224 2501 log.go:172] (0xc0008d6160) Data frame received for 1\nI0515 14:15:04.927238 2501 log.go:172] (0xc000556640) (1) Data frame handling\nI0515 14:15:04.927247 2501 log.go:172] (0xc000556640) (1) Data frame sent\nI0515 14:15:04.927257 2501 log.go:172] (0xc0008d6160) (0xc000556640) Stream removed, broadcasting: 1\nI0515 14:15:04.927270 2501 log.go:172] (0xc0008d6160) Go away received\nI0515 14:15:04.927619 2501 log.go:172] (0xc0008d6160) (0xc000556640) Stream removed, broadcasting: 1\nI0515 14:15:04.927634 2501 log.go:172] (0xc0008d6160) (0xc0005566e0) Stream removed, broadcasting: 3\nI0515 14:15:04.927642 2501 log.go:172] (0xc0008d6160) (0xc000556780) Stream removed, broadcasting: 5\n" May 15 14:15:04.931: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 14:15:04.931: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 14:15:04.934: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 14:15:04.934: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 14:15:04.934: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 15 14:15:04.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:15:05.127: INFO: stderr: "I0515 14:15:05.053381 2520 log.go:172] (0xc00094e630) (0xc000632aa0) Create stream\nI0515 14:15:05.053423 2520 log.go:172] (0xc00094e630) (0xc000632aa0) Stream added, broadcasting: 1\nI0515 14:15:05.054976 2520 log.go:172] (0xc00094e630) Reply frame received for 1\nI0515 14:15:05.055018 2520 log.go:172] (0xc00094e630) (0xc000934000) Create stream\nI0515 14:15:05.055038 2520 log.go:172] (0xc00094e630) (0xc000934000) Stream added, broadcasting: 3\nI0515 14:15:05.055697 2520 log.go:172] (0xc00094e630) Reply frame received for 3\nI0515 14:15:05.055721 2520 log.go:172] (0xc00094e630) (0xc0009340a0) Create stream\nI0515 14:15:05.055729 2520 log.go:172] (0xc00094e630) (0xc0009340a0) Stream added, broadcasting: 5\nI0515 14:15:05.056337 2520 log.go:172] (0xc00094e630) Reply frame received for 5\nI0515 14:15:05.122767 2520 log.go:172] (0xc00094e630) Data frame received for 5\nI0515 14:15:05.122805 2520 log.go:172] (0xc0009340a0) (5) Data frame handling\nI0515 14:15:05.122824 2520 log.go:172] 
(0xc0009340a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:15:05.122878 2520 log.go:172] (0xc00094e630) Data frame received for 5\nI0515 14:15:05.122903 2520 log.go:172] (0xc0009340a0) (5) Data frame handling\nI0515 14:15:05.122951 2520 log.go:172] (0xc00094e630) Data frame received for 3\nI0515 14:15:05.122970 2520 log.go:172] (0xc000934000) (3) Data frame handling\nI0515 14:15:05.122985 2520 log.go:172] (0xc000934000) (3) Data frame sent\nI0515 14:15:05.122997 2520 log.go:172] (0xc00094e630) Data frame received for 3\nI0515 14:15:05.123012 2520 log.go:172] (0xc000934000) (3) Data frame handling\nI0515 14:15:05.124211 2520 log.go:172] (0xc00094e630) Data frame received for 1\nI0515 14:15:05.124243 2520 log.go:172] (0xc000632aa0) (1) Data frame handling\nI0515 14:15:05.124272 2520 log.go:172] (0xc000632aa0) (1) Data frame sent\nI0515 14:15:05.124294 2520 log.go:172] (0xc00094e630) (0xc000632aa0) Stream removed, broadcasting: 1\nI0515 14:15:05.124315 2520 log.go:172] (0xc00094e630) Go away received\nI0515 14:15:05.124734 2520 log.go:172] (0xc00094e630) (0xc000632aa0) Stream removed, broadcasting: 1\nI0515 14:15:05.124751 2520 log.go:172] (0xc00094e630) (0xc000934000) Stream removed, broadcasting: 3\nI0515 14:15:05.124760 2520 log.go:172] (0xc00094e630) (0xc0009340a0) Stream removed, broadcasting: 5\n" May 15 14:15:05.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 14:15:05.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 14:15:05.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:15:05.381: INFO: stderr: "I0515 14:15:05.258774 2540 log.go:172] (0xc000908420) (0xc0007ae640) Create stream\nI0515 14:15:05.258807 2540 log.go:172] (0xc000908420) (0xc0007ae640) Stream added, broadcasting: 1\nI0515 14:15:05.265691 2540 log.go:172] (0xc000908420) Reply frame received for 1\nI0515 14:15:05.265788 2540 log.go:172] (0xc000908420) (0xc0008cc000) Create stream\nI0515 14:15:05.265811 2540 log.go:172] (0xc000908420) (0xc0008cc000) Stream added, broadcasting: 3\nI0515 14:15:05.268713 2540 log.go:172] (0xc000908420) Reply frame received for 3\nI0515 14:15:05.268743 2540 log.go:172] (0xc000908420) (0xc0007ae6e0) Create stream\nI0515 14:15:05.268752 2540 log.go:172] (0xc000908420) (0xc0007ae6e0) Stream added, broadcasting: 5\nI0515 14:15:05.270622 2540 log.go:172] (0xc000908420) Reply frame received for 5\nI0515 14:15:05.329901 2540 log.go:172] (0xc000908420) Data frame received for 5\nI0515 14:15:05.329923 2540 log.go:172] (0xc0007ae6e0) (5) Data frame handling\nI0515 14:15:05.329937 2540 log.go:172] (0xc0007ae6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:15:05.375397 2540 log.go:172] (0xc000908420) Data frame received for 3\nI0515 14:15:05.375418 2540 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0515 14:15:05.375434 2540 log.go:172] (0xc0008cc000) (3) Data frame sent\nI0515 14:15:05.375440 2540 log.go:172] (0xc000908420) Data frame received for 3\nI0515 14:15:05.375446 2540 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0515 14:15:05.375537 2540 log.go:172] (0xc000908420) Data frame received for 5\nI0515 14:15:05.375552 2540 log.go:172] (0xc0007ae6e0) (5) Data frame handling\nI0515 14:15:05.376829 2540 log.go:172] (0xc000908420) Data frame received 
for 1\nI0515 14:15:05.376846 2540 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0515 14:15:05.376860 2540 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0515 14:15:05.376916 2540 log.go:172] (0xc000908420) (0xc0007ae640) Stream removed, broadcasting: 1\nI0515 14:15:05.376929 2540 log.go:172] (0xc000908420) Go away received\nI0515 14:15:05.377420 2540 log.go:172] (0xc000908420) (0xc0007ae640) Stream removed, broadcasting: 1\nI0515 14:15:05.377432 2540 log.go:172] (0xc000908420) (0xc0008cc000) Stream removed, broadcasting: 3\nI0515 14:15:05.377438 2540 log.go:172] (0xc000908420) (0xc0007ae6e0) Stream removed, broadcasting: 5\n" May 15 14:15:05.381: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 14:15:05.381: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 14:15:05.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5734 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:15:05.574: INFO: stderr: "I0515 14:15:05.490239 2561 log.go:172] (0xc0009602c0) (0xc000844640) Create stream\nI0515 14:15:05.490298 2561 log.go:172] (0xc0009602c0) (0xc000844640) Stream added, broadcasting: 1\nI0515 14:15:05.492315 2561 log.go:172] (0xc0009602c0) Reply frame received for 1\nI0515 14:15:05.492390 2561 log.go:172] (0xc0009602c0) (0xc0008446e0) Create stream\nI0515 14:15:05.492413 2561 log.go:172] (0xc0009602c0) (0xc0008446e0) Stream added, broadcasting: 3\nI0515 14:15:05.493571 2561 log.go:172] (0xc0009602c0) Reply frame received for 3\nI0515 14:15:05.493610 2561 log.go:172] (0xc0009602c0) (0xc000918000) Create stream\nI0515 14:15:05.493623 2561 log.go:172] (0xc0009602c0) (0xc000918000) Stream added, broadcasting: 5\nI0515 14:15:05.494431 2561 log.go:172] (0xc0009602c0) Reply frame received for 5\nI0515 14:15:05.535897 2561 log.go:172] (0xc0009602c0) Data frame received for 5\nI0515 14:15:05.535924 2561 log.go:172] (0xc000918000) (5) Data frame handling\nI0515 14:15:05.535944 2561 log.go:172] (0xc000918000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:15:05.567593 2561 log.go:172] (0xc0009602c0) Data frame received for 5\nI0515 14:15:05.567631 2561 log.go:172] (0xc000918000) (5) Data frame handling\nI0515 14:15:05.567653 2561 log.go:172] (0xc0009602c0) Data frame received for 3\nI0515 14:15:05.567663 2561 log.go:172] (0xc0008446e0) (3) Data frame handling\nI0515 14:15:05.567690 2561 log.go:172] (0xc0008446e0) (3) Data frame sent\nI0515 14:15:05.567712 2561 log.go:172] (0xc0009602c0) Data frame received for 3\nI0515 14:15:05.567722 2561 log.go:172] (0xc0008446e0) (3) Data frame handling\nI0515 14:15:05.568832 2561 log.go:172] (0xc0009602c0) Data frame received for 1\nI0515 14:15:05.568895 2561 log.go:172] (0xc000844640) (1) Data frame handling\nI0515 14:15:05.568911 2561 log.go:172] (0xc000844640) (1) Data frame sent\nI0515 14:15:05.568918 2561 log.go:172] (0xc0009602c0) (0xc000844640) Stream removed, broadcasting: 1\nI0515 14:15:05.568924 2561 log.go:172] (0xc0009602c0) Go away received\nI0515 14:15:05.569606 2561 log.go:172] (0xc0009602c0) (0xc000844640) Stream removed, broadcasting: 1\nI0515 14:15:05.569633 2561 log.go:172] (0xc0009602c0) (0xc0008446e0) Stream removed, broadcasting: 3\nI0515 14:15:05.569645 2561 log.go:172] (0xc0009602c0) (0xc000918000) Stream removed, broadcasting: 5\n" May 15 14:15:05.574: INFO: stdout: "'/usr/share/nginx/html/index.html' 
-> '/tmp/index.html'\n" May 15 14:15:05.574: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 14:15:05.574: INFO: Waiting for statefulset status.replicas updated to 0 May 15 14:15:05.576: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 15 14:15:15.585: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 14:15:15.585: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 14:15:15.585: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 14:15:15.610: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:15.610: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:15.610: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:15.610: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:15.610: INFO: May 15 14:15:15.610: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 14:15:16.614: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:16.614: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:16.614: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:16.614: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:16.614: INFO: May 15 14:15:16.614: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 14:15:17.634: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:17.634: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:17.634: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:17.634: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:17.634: INFO: May 15 14:15:17.634: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 14:15:18.657: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:18.657: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:18.657: INFO: ss-1 iruya-worker Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:18.657: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:18.657: INFO: May 15 14:15:18.657: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 14:15:19.662: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:19.662: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:19.662: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:19.662: INFO: May 15 14:15:19.662: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 14:15:20.667: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:20.667: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:20.667: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:20.667: INFO: May 15 14:15:20.667: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 14:15:21.671: INFO: POD NODE PHASE GRACE CONDITIONS May 15 14:15:21.671: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:30 +0000 UTC }] May 15 14:15:21.672: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 14:14:53 +0000 UTC }] May 15 14:15:21.672: INFO: May 
15 14:15:21.672: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 14:15:22.676: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.919568792s May 15 14:15:23.681: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.915091022s May 15 14:15:24.686: INFO: Verifying statefulset ss doesn't scale past 0 for another 909.750299ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5734 May 15 14:15:25.690: INFO: Scaling statefulset ss to 0 May 15 14:15:25.701: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 15 14:15:25.704: INFO: Deleting all statefulset in ns statefulset-5734 May 15 14:15:25.706: INFO: Scaling statefulset ss to 0 May 15 14:15:25.713: INFO: Waiting for statefulset status.replicas updated to 0 May 15 14:15:25.715: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:15:25.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5734" for this suite. May 15 14:15:31.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:15:31.886: INFO: namespace statefulset-5734 deletion completed in 6.122739891s • [SLOW TEST:61.928 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:15:31.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 15 14:15:36.000: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 15 14:15:46.087: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May
15 14:15:46.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-36" for this suite. May 15 14:15:52.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:15:52.183: INFO: namespace pods-36 deletion completed in 6.072434829s • [SLOW TEST:20.297 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:15:52.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 14:15:52.264: INFO: Waiting up to 5m0s for pod "pod-57de3aad-0397-46d3-9091-36b8cec68c0c" in namespace "emptydir-81" to be "success or failure" May 15 14:15:52.269: INFO: Pod "pod-57de3aad-0397-46d3-9091-36b8cec68c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697458ms May 15 14:15:54.272: INFO: Pod "pod-57de3aad-0397-46d3-9091-36b8cec68c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988973s May 15 14:15:56.277: INFO: Pod "pod-57de3aad-0397-46d3-9091-36b8cec68c0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012312899s STEP: Saw pod success May 15 14:15:56.277: INFO: Pod "pod-57de3aad-0397-46d3-9091-36b8cec68c0c" satisfied condition "success or failure" May 15 14:15:56.280: INFO: Trying to get logs from node iruya-worker2 pod pod-57de3aad-0397-46d3-9091-36b8cec68c0c container test-container: STEP: delete the pod May 15 14:15:56.299: INFO: Waiting for pod pod-57de3aad-0397-46d3-9091-36b8cec68c0c to disappear May 15 14:15:56.304: INFO: Pod pod-57de3aad-0397-46d3-9091-36b8cec68c0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:15:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-81" for this suite. 
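The emptydir case above passes once the test pod runs to Succeeded and its captured output shows the expected file mode on the tmpfs-backed mount. A minimal sketch of reproducing the scenario by hand with kubectl; the pod name, image, UID, and probe command below are illustrative stand-ins, not the framework's exact fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # non-root, matching the test title
  containers:
  - name: test-container
    image: busybox                    # stand-in for the suite's test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs, per the (tmpfs) variant in the test name
EOF
kubectl logs emptydir-0666-demo       # once Succeeded: expect -rw-rw-rw- and fstype tmpfs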
May 15 14:16:02.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:16:02.411: INFO: namespace emptydir-81 deletion completed in 6.104003588s • [SLOW TEST:10.227 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:16:02.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:16:02.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7480' May 15 14:16:02.764: INFO: stderr: "" May 15 14:16:02.764: INFO: stdout: "replicationcontroller/redis-master created\n" May 15 14:16:02.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7480' May 15 14:16:03.101: INFO: stderr: "" May 15 14:16:03.101: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 15 14:16:04.106: INFO: Selector matched 1 pods for map[app:redis] May 15 14:16:04.106: INFO: Found 0 / 1 May 15 14:16:05.106: INFO: Selector matched 1 pods for map[app:redis] May 15 14:16:05.106: INFO: Found 0 / 1 May 15 14:16:06.104: INFO: Selector matched 1 pods for map[app:redis] May 15 14:16:06.104: INFO: Found 0 / 1 May 15 14:16:07.106: INFO: Selector matched 1 pods for map[app:redis] May 15 14:16:07.106: INFO: Found 1 / 1 May 15 14:16:07.106: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 14:16:07.109: INFO: Selector matched 1 pods for map[app:redis] May 15 14:16:07.109: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
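Both 'create -f -' invocations above read their manifest from stdin, which is why the log records only the command and not the object body. A sketch of the replicationcontroller creation in that style, reconstructed from the describe output below rather than copied from the suite's verbatim fixture:

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7480 <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:                 # matches the Selector shown by 'kubectl describe rc'
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
EOF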
May 15 14:16:07.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7pzhz --namespace=kubectl-7480' May 15 14:16:07.227: INFO: stderr: "" May 15 14:16:07.227: INFO: stdout: "Name: redis-master-7pzhz\nNamespace: kubectl-7480\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Fri, 15 May 2020 14:16:02 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.70\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://28483544e301d2d6b3232adff1f46dd1948f198d7567fc041984e5c2b45b3c33\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 15 May 2020 14:16:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qnbm9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qnbm9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qnbm9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-7480/redis-master-7pzhz to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" May 15 14:16:07.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7480' May 15 14:16:07.343: INFO: stderr: "" May 15 14:16:07.343: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7480\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-7pzhz\n" May 15 14:16:07.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7480' May 15 14:16:07.442: INFO: stderr: "" May 15 14:16:07.442: INFO: stdout: "Name: redis-master\nNamespace: kubectl-7480\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.132.181\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.70:6379\nSession Affinity: None\nEvents: \n" May 15 14:16:07.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 15 14:16:07.554: INFO: stderr: "" May 15 14:16:07.554: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 15 May 2020 14:16:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 15 May 2020 14:16:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 15 May 2020 14:16:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 15 May 2020 14:16:03 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 60d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 15 14:16:07.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7480' May 15 14:16:07.650: INFO: stderr: "" May 15 14:16:07.650: INFO: stdout: "Name: kubectl-7480\nLabels: e2e-framework=kubectl\n e2e-run=108568fc-bc5b-4a6d-b7a4-b391846335c1\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:16:07.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7480" for this suite. 
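What makes this case pass is not the describe calls themselves but the substring checks the framework runs over each output: pod status and controlling RC, RC replica counts, service type and endpoints, node conditions, and namespace status. A rough shell equivalent of those checks, runnable while the namespace still exists; the names come from this run, but the grep patterns are simplified approximations of what the framework matches:

kubectl describe pod redis-master-7pzhz -n kubectl-7480  | grep -E 'Status: *Running|Controlled By: *ReplicationController/redis-master'
kubectl describe rc redis-master -n kubectl-7480         | grep 'Replicas: *1 current / 1 desired'
kubectl describe service redis-master -n kubectl-7480    | grep -E 'Type: *ClusterIP|Endpoints: *10\.244\.1\.70:6379'
kubectl describe node iruya-control-plane                | grep -E 'Ready *True|KubeletReady'
kubectl describe namespace kubectl-7480                  | grep 'Status: *Active'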
May 15 14:16:29.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:16:29.739: INFO: namespace kubectl-7480 deletion completed in 22.087337761s • [SLOW TEST:27.328 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:16:29.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9331 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 15 14:16:29.976: INFO: Found 0 stateful pods, waiting for 3 May 15 14:16:40.006: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 14:16:40.006: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 14:16:40.006: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 15 14:16:49.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 14:16:49.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 14:16:49.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 15 14:16:50.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9331 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:16:50.292: INFO: stderr: "I0515 14:16:50.156005 2740 log.go:172] (0xc00092c420) (0xc0001fa820) Create stream\nI0515 14:16:50.156063 2740 log.go:172] (0xc00092c420) (0xc0001fa820) Stream added, broadcasting: 1\nI0515 14:16:50.158730 2740 log.go:172] (0xc00092c420) Reply frame received for 1\nI0515 14:16:50.158793 2740 log.go:172] (0xc00092c420) (0xc000964000) Create stream\nI0515 14:16:50.158812 2740 log.go:172] (0xc00092c420) (0xc000964000) Stream added, broadcasting: 3\nI0515 14:16:50.159996 2740 log.go:172] (0xc00092c420) Reply frame received for 3\nI0515 14:16:50.160071 2740 log.go:172] (0xc00092c420) (0xc00076c000) Create stream\nI0515 14:16:50.160097 
2740 log.go:172] (0xc00092c420) (0xc00076c000) Stream added, broadcasting: 5\nI0515 14:16:50.161053 2740 log.go:172] (0xc00092c420) Reply frame received for 5\nI0515 14:16:50.254184 2740 log.go:172] (0xc00092c420) Data frame received for 5\nI0515 14:16:50.254225 2740 log.go:172] (0xc00076c000) (5) Data frame handling\nI0515 14:16:50.254247 2740 log.go:172] (0xc00076c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:16:50.286309 2740 log.go:172] (0xc00092c420) Data frame received for 5\nI0515 14:16:50.286359 2740 log.go:172] (0xc00076c000) (5) Data frame handling\nI0515 14:16:50.286393 2740 log.go:172] (0xc00092c420) Data frame received for 3\nI0515 14:16:50.286408 2740 log.go:172] (0xc000964000) (3) Data frame handling\nI0515 14:16:50.286418 2740 log.go:172] (0xc000964000) (3) Data frame sent\nI0515 14:16:50.286456 2740 log.go:172] (0xc00092c420) Data frame received for 3\nI0515 14:16:50.286467 2740 log.go:172] (0xc000964000) (3) Data frame handling\nI0515 14:16:50.288138 2740 log.go:172] (0xc00092c420) Data frame received for 1\nI0515 14:16:50.288162 2740 log.go:172] (0xc0001fa820) (1) Data frame handling\nI0515 14:16:50.288177 2740 log.go:172] (0xc0001fa820) (1) Data frame sent\nI0515 14:16:50.288197 2740 log.go:172] (0xc00092c420) (0xc0001fa820) Stream removed, broadcasting: 1\nI0515 14:16:50.288262 2740 log.go:172] (0xc00092c420) Go away received\nI0515 14:16:50.288621 2740 log.go:172] (0xc00092c420) (0xc0001fa820) Stream removed, broadcasting: 1\nI0515 14:16:50.288648 2740 log.go:172] (0xc00092c420) (0xc000964000) Stream removed, broadcasting: 3\nI0515 14:16:50.288657 2740 log.go:172] (0xc00092c420) (0xc00076c000) Stream removed, broadcasting: 5\n" May 15 14:16:50.292: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 14:16:50.292: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 15 14:17:00.364: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 15 14:17:10.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9331 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 14:17:10.604: INFO: stderr: "I0515 14:17:10.520432 2760 log.go:172] (0xc0009da420) (0xc00032a820) Create stream\nI0515 14:17:10.520515 2760 log.go:172] (0xc0009da420) (0xc00032a820) Stream added, broadcasting: 1\nI0515 14:17:10.523653 2760 log.go:172] (0xc0009da420) Reply frame received for 1\nI0515 14:17:10.523725 2760 log.go:172] (0xc0009da420) (0xc00089c000) Create stream\nI0515 14:17:10.523759 2760 log.go:172] (0xc0009da420) (0xc00089c000) Stream added, broadcasting: 3\nI0515 14:17:10.525334 2760 log.go:172] (0xc0009da420) Reply frame received for 3\nI0515 14:17:10.525404 2760 log.go:172] (0xc0009da420) (0xc0008e4000) Create stream\nI0515 14:17:10.525429 2760 log.go:172] (0xc0009da420) (0xc0008e4000) Stream added, broadcasting: 5\nI0515 14:17:10.526425 2760 log.go:172] (0xc0009da420) Reply frame received for 5\nI0515 14:17:10.596826 2760 log.go:172] (0xc0009da420) Data frame received for 5\nI0515 14:17:10.596859 2760 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0515 14:17:10.596868 2760 log.go:172] (0xc0008e4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 
14:17:10.596888 2760 log.go:172] (0xc0009da420) Data frame received for 3\nI0515 14:17:10.596894 2760 log.go:172] (0xc00089c000) (3) Data frame handling\nI0515 14:17:10.596900 2760 log.go:172] (0xc00089c000) (3) Data frame sent\nI0515 14:17:10.596904 2760 log.go:172] (0xc0009da420) Data frame received for 3\nI0515 14:17:10.596910 2760 log.go:172] (0xc00089c000) (3) Data frame handling\nI0515 14:17:10.597108 2760 log.go:172] (0xc0009da420) Data frame received for 5\nI0515 14:17:10.597301 2760 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0515 14:17:10.598882 2760 log.go:172] (0xc0009da420) Data frame received for 1\nI0515 14:17:10.598900 2760 log.go:172] (0xc00032a820) (1) Data frame handling\nI0515 14:17:10.598915 2760 log.go:172] (0xc00032a820) (1) Data frame sent\nI0515 14:17:10.598930 2760 log.go:172] (0xc0009da420) (0xc00032a820) Stream removed, broadcasting: 1\nI0515 14:17:10.599208 2760 log.go:172] (0xc0009da420) (0xc00032a820) Stream removed, broadcasting: 1\nI0515 14:17:10.599223 2760 log.go:172] (0xc0009da420) (0xc00089c000) Stream removed, broadcasting: 3\nI0515 14:17:10.599333 2760 log.go:172] (0xc0009da420) (0xc0008e4000) Stream removed, broadcasting: 5\n" May 15 14:17:10.605: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 14:17:10.605: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 14:17:30.626: INFO: Waiting for StatefulSet statefulset-9331/ss2 to complete update STEP: Rolling back to a previous revision May 15 14:17:40.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9331 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 15 14:17:40.894: INFO: stderr: "I0515 14:17:40.762199 2780 log.go:172] (0xc000920630) (0xc00045abe0) Create stream\nI0515 14:17:40.762265 2780 log.go:172] (0xc000920630) (0xc00045abe0) Stream added, broadcasting: 1\nI0515 14:17:40.764580 2780 log.go:172] (0xc000920630) Reply frame received for 1\nI0515 14:17:40.764645 2780 log.go:172] (0xc000920630) (0xc0007b2000) Create stream\nI0515 14:17:40.764668 2780 log.go:172] (0xc000920630) (0xc0007b2000) Stream added, broadcasting: 3\nI0515 14:17:40.766090 2780 log.go:172] (0xc000920630) Reply frame received for 3\nI0515 14:17:40.766178 2780 log.go:172] (0xc000920630) (0xc00093e000) Create stream\nI0515 14:17:40.766203 2780 log.go:172] (0xc000920630) (0xc00093e000) Stream added, broadcasting: 5\nI0515 14:17:40.767251 2780 log.go:172] (0xc000920630) Reply frame received for 5\nI0515 14:17:40.848669 2780 log.go:172] (0xc000920630) Data frame received for 5\nI0515 14:17:40.848688 2780 log.go:172] (0xc00093e000) (5) Data frame handling\nI0515 14:17:40.848700 2780 log.go:172] (0xc00093e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0515 14:17:40.887526 2780 log.go:172] (0xc000920630) Data frame received for 3\nI0515 14:17:40.887573 2780 log.go:172] (0xc0007b2000) (3) Data frame handling\nI0515 14:17:40.887604 2780 log.go:172] (0xc0007b2000) (3) Data frame sent\nI0515 14:17:40.887624 2780 log.go:172] (0xc000920630) Data frame received for 3\nI0515 14:17:40.887637 2780 log.go:172] (0xc0007b2000) (3) Data frame handling\nI0515 14:17:40.887681 2780 log.go:172] (0xc000920630) Data frame received for 5\nI0515 14:17:40.887703 2780 log.go:172] (0xc00093e000) (5) Data frame handling\nI0515 14:17:40.889408 2780 log.go:172] (0xc000920630) Data frame received for 1\nI0515 14:17:40.889438 
2780 log.go:172] (0xc00045abe0) (1) Data frame handling\nI0515 14:17:40.889452 2780 log.go:172] (0xc00045abe0) (1) Data frame sent\nI0515 14:17:40.890256 2780 log.go:172] (0xc000920630) (0xc00045abe0) Stream removed, broadcasting: 1\nI0515 14:17:40.890389 2780 log.go:172] (0xc000920630) Go away received\nI0515 14:17:40.891451 2780 log.go:172] (0xc000920630) (0xc00045abe0) Stream removed, broadcasting: 1\nI0515 14:17:40.891465 2780 log.go:172] (0xc000920630) (0xc0007b2000) Stream removed, broadcasting: 3\nI0515 14:17:40.891472 2780 log.go:172] (0xc000920630) (0xc00093e000) Stream removed, broadcasting: 5\n" May 15 14:17:40.894: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 15 14:17:40.894: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 15 14:17:50.928: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 15 14:18:00.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9331 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 15 14:18:01.218: INFO: stderr: "I0515 14:18:01.119362 2802 log.go:172] (0xc000a62420) (0xc0005006e0) Create stream\nI0515 14:18:01.119445 2802 log.go:172] (0xc000a62420) (0xc0005006e0) Stream added, broadcasting: 1\nI0515 14:18:01.123586 2802 log.go:172] (0xc000a62420) Reply frame received for 1\nI0515 14:18:01.123629 2802 log.go:172] (0xc000a62420) (0xc000500000) Create stream\nI0515 14:18:01.123642 2802 log.go:172] (0xc000a62420) (0xc000500000) Stream added, broadcasting: 3\nI0515 14:18:01.124704 2802 log.go:172] (0xc000a62420) Reply frame received for 3\nI0515 14:18:01.124775 2802 log.go:172] (0xc000a62420) (0xc0005b83c0) Create stream\nI0515 14:18:01.124800 2802 log.go:172] (0xc000a62420) (0xc0005b83c0) Stream added, broadcasting: 5\nI0515 14:18:01.125823 2802 log.go:172] (0xc000a62420) Reply frame received for 5\nI0515 14:18:01.212074 2802 log.go:172] (0xc000a62420) Data frame received for 3\nI0515 14:18:01.212123 2802 log.go:172] (0xc000500000) (3) Data frame handling\nI0515 14:18:01.212146 2802 log.go:172] (0xc000500000) (3) Data frame sent\nI0515 14:18:01.212211 2802 log.go:172] (0xc000a62420) Data frame received for 3\nI0515 14:18:01.212225 2802 log.go:172] (0xc000500000) (3) Data frame handling\nI0515 14:18:01.212253 2802 log.go:172] (0xc000a62420) Data frame received for 5\nI0515 14:18:01.212269 2802 log.go:172] (0xc0005b83c0) (5) Data frame handling\nI0515 14:18:01.212287 2802 log.go:172] (0xc0005b83c0) (5) Data frame sent\nI0515 14:18:01.212311 2802 log.go:172] (0xc000a62420) Data frame received for 5\nI0515 14:18:01.212327 2802 log.go:172] (0xc0005b83c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0515 14:18:01.214043 2802 log.go:172] (0xc000a62420) Data frame received for 1\nI0515 14:18:01.214074 2802 log.go:172] (0xc0005006e0) (1) Data frame handling\nI0515 14:18:01.214089 2802 log.go:172] (0xc0005006e0) (1) Data frame sent\nI0515 14:18:01.214107 2802 log.go:172] (0xc000a62420) (0xc0005006e0) Stream removed, broadcasting: 1\nI0515 14:18:01.214127 2802 log.go:172] (0xc000a62420) Go away received\nI0515 14:18:01.214550 2802 log.go:172] (0xc000a62420) (0xc0005006e0) Stream removed, broadcasting: 1\nI0515 14:18:01.214588 2802 log.go:172] (0xc000a62420) (0xc000500000) Stream removed, broadcasting: 3\nI0515 14:18:01.214610 2802 log.go:172] (0xc000a62420) (0xc0005b83c0) Stream removed, broadcasting: 
5\n" May 15 14:18:01.218: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 15 14:18:01.218: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 15 14:18:21.715: INFO: Waiting for StatefulSet statefulset-9331/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 15 14:18:31.722: INFO: Deleting all statefulset in ns statefulset-9331 May 15 14:18:31.725: INFO: Scaling statefulset ss2 to 0 May 15 14:18:51.744: INFO: Waiting for statefulset status.replicas updated to 0 May 15 14:18:51.748: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:18:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9331" for this suite. May 15 14:18:57.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:18:57.962: INFO: namespace statefulset-9331 deletion completed in 6.176525134s • [SLOW TEST:148.223 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:18:57.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2539.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2539.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 14:19:04.252: INFO: DNS probes using dns-2539/dns-test-c124716f-00ae-4137-983d-f723e39aaa2e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:19:04.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2539" for this suite. May 15 14:19:10.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:19:10.446: INFO: namespace dns-2539 deletion completed in 6.153677166s • [SLOW TEST:12.483 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:19:10.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5670/configmap-test-f7ce6479-aea3-48b6-b50b-eed2942b77c3 STEP: Creating a pod to test consume configMaps May 15 14:19:10.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f" in namespace "configmap-5670" to be "success or failure" May 15 14:19:10.518: INFO: Pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.757916ms May 15 14:19:12.522: INFO: Pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007396135s May 15 14:19:14.526: INFO: Pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f": Phase="Running", Reason="", readiness=true. Elapsed: 4.011277242s May 15 14:19:16.530: INFO: Pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015853992s STEP: Saw pod success May 15 14:19:16.530: INFO: Pod "pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f" satisfied condition "success or failure" May 15 14:19:16.534: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f container env-test: STEP: delete the pod May 15 14:19:16.561: INFO: Waiting for pod pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f to disappear May 15 14:19:16.580: INFO: Pod pod-configmaps-8556cebd-cb84-4b5b-9cca-c6fd18bb266f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:19:16.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5670" for this suite. May 15 14:19:22.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:19:22.715: INFO: namespace configmap-5670 deletion completed in 6.108571034s • [SLOW TEST:12.269 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:19:22.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 15 14:19:27.314: INFO: Successfully updated pod "annotationupdate5810994b-8b09-43d3-918c-d3871972ebd4" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:19:29.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8610" for this suite. 
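The annotations test above works through a downwardAPI volume: the kubelet re-renders the projected file whenever pod metadata changes, so the test only has to update the annotation and watch the mounted content follow. A minimal sketch of the same mechanism outside the harness (pod and annotation names here are illustrative, not the harness's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    demo: "v1"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo demo=v2 --overwrite   # mounted file updates within the kubelet sync period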
May 15 14:19:49.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:19:49.425: INFO: namespace downward-api-8610 deletion completed in 20.077018513s • [SLOW TEST:26.709 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:19:49.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 15 14:19:49.508: INFO: Pod name pod-release: Found 0 pods out of 1 May 15 14:19:54.514: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:19:55.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4096" for this suite. 
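The "release" above is purely a labeling operation: once a pod's labels stop matching the ReplicationController's selector, the controller orphans it and creates a replacement. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release-demo
  template:
    metadata:
      labels:
        name: pod-release-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
POD=$(kubectl get pods -l name=pod-release-demo -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite   # no longer matches the selector
kubectl get pods                                     # RC starts a replacement; the relabeled pod is orphaned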
May 15 14:20:01.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:20:01.661: INFO: namespace replication-controller-4096 deletion completed in 6.076796335s • [SLOW TEST:12.236 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:20:01.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 15 14:20:06.511: INFO: Waiting up to 5m0s for pod "client-envvars-708d898e-3cec-4b69-be15-13ee4f018967" in namespace "pods-1920" to be "success or failure" May 15 14:20:06.514: INFO: Pod "client-envvars-708d898e-3cec-4b69-be15-13ee4f018967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.810293ms May 15 14:20:08.566: INFO: Pod "client-envvars-708d898e-3cec-4b69-be15-13ee4f018967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054269927s May 15 14:20:10.571: INFO: Pod "client-envvars-708d898e-3cec-4b69-be15-13ee4f018967": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059979206s STEP: Saw pod success May 15 14:20:10.571: INFO: Pod "client-envvars-708d898e-3cec-4b69-be15-13ee4f018967" satisfied condition "success or failure" May 15 14:20:10.574: INFO: Trying to get logs from node iruya-worker pod client-envvars-708d898e-3cec-4b69-be15-13ee4f018967 container env3cont: STEP: delete the pod May 15 14:20:10.611: INFO: Waiting for pod client-envvars-708d898e-3cec-4b69-be15-13ee4f018967 to disappear May 15 14:20:10.627: INFO: Pod client-envvars-708d898e-3cec-4b69-be15-13ee4f018967 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:20:10.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1920" for this suite. 
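The service environment variables checked above are injected only into containers created after the service exists, which is why the test creates the service first and the client pod afterwards. A quick way to observe the same thing, with hypothetical names:

kubectl create service clusterip envvar-demo --tcp=80:80
kubectl run envvar-client --generator=run-pod/v1 --restart=Never --image=busybox -- sh -c 'env | grep ENVVAR_DEMO'
kubectl logs envvar-client   # expect ENVVAR_DEMO_SERVICE_HOST, ENVVAR_DEMO_SERVICE_PORT, ...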
May 15 14:20:52.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:20:52.755: INFO: namespace pods-1920 deletion completed in 42.123944585s • [SLOW TEST:51.094 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:20:52.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0515 14:21:23.329871 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 14:21:23.329: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:21:23.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3731" for this suite. 
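deleteOptions.PropagationPolicy=Orphan removes the owner object but leaves its dependents in place, which is why the ReplicaSet is expected to survive the 30-second window above. On a kubectl of this vintage the equivalent is --cascade=false:

kubectl create deployment orphan-demo --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment orphan-demo --cascade=false   # sends propagationPolicy=Orphan
kubectl get rs -l app=orphan-demo                       # the ReplicaSet and its pods remain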
May 15 14:21:31.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:21:31.447: INFO: namespace gc-3731 deletion completed in 8.113958616s • [SLOW TEST:38.691 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:21:31.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-add2158f-b6d3-42ba-8976-12d202b9d186 STEP: Creating secret with name s-test-opt-upd-9f52095c-c67b-4453-8b51-9e62627f41a3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-add2158f-b6d3-42ba-8976-12d202b9d186 STEP: Updating secret s-test-opt-upd-9f52095c-c67b-4453-8b51-9e62627f41a3 STEP: Creating secret with name s-test-opt-create-7fccf351-bb9f-43c5-a395-c3928921d85f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:22:46.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3616" for this suite. 
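The optional: true marker on a secret volume lets the pod start even when a referenced secret does not exist yet, and the kubelet keeps the mounted files in sync as secrets are deleted, updated, or created, which is the exact sequence driven above. A sketch of the relevant volume spec (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: maybe-missing   # hypothetical; need not exist at pod creation
      optional: true              # files appear once the secret is created
EOF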
May 15 14:23:10.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:23:10.163: INFO: namespace secrets-3616 deletion completed in 24.087925311s • [SLOW TEST:98.716 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:23:10.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 15 14:23:10.212: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:23:19.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6715" for this suite. 
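Init containers run sequentially to completion before the app containers start; with restartPolicy: Always the pod should reach Running once every init container has exited 0, which is what the test waits for. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
# expect: Completed Completed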
May 15 14:23:41.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:23:41.480: INFO: namespace init-container-6715 deletion completed in 22.094733207s • [SLOW TEST:31.317 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:23:41.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 14:23:41.578: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa" in namespace "downward-api-2308" to be "success or failure" May 15 14:23:41.581: INFO: Pod "downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2251ms May 15 14:23:43.585: INFO: Pod "downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657641s May 15 14:23:45.590: INFO: Pod "downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012000344s STEP: Saw pod success May 15 14:23:45.590: INFO: Pod "downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa" satisfied condition "success or failure" May 15 14:23:45.592: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa container client-container: STEP: delete the pod May 15 14:23:45.620: INFO: Waiting for pod downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa to disappear May 15 14:23:45.624: INFO: Pod downwardapi-volume-a71479f2-af25-43ee-9dd8-dab917df99aa no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:23:45.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2308" for this suite. 
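When a container declares no CPU limit, a downwardAPI resourceFieldRef for limits.cpu falls back to the node's allocatable CPU, which is the value the test asserts. A minimal reproduction (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu    # no limit set, so this resolves to node allocatable CPU
EOF
kubectl logs default-cpu-limit-demo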
May 15 14:23:51.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:23:51.719: INFO: namespace downward-api-2308 deletion completed in 6.091244486s • [SLOW TEST:10.238 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:23:51.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 15 14:23:51.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8179' May 15 14:23:51.859: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 14:23:51.859: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 15 14:23:53.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8179' May 15 14:23:54.041: INFO: stderr: "" May 15 14:23:54.042: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:23:54.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8179" for this suite. 
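The deprecation warning in the captured stderr is expected on this kubectl version: a bare kubectl run defaults to the deployment/apps.v1 generator. The non-deprecated equivalents it suggests:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# or, for a bare pod rather than a deployment:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine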
May 15 14:25:52.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:25:52.346: INFO: namespace kubectl-8179 deletion completed in 1m58.165871069s • [SLOW TEST:120.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:25:52.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 15 14:25:52.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 15 14:25:55.095: INFO: stderr: "" May 15 14:25:55.095: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:25:55.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1825" for this suite. 
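The \x1b[...]m sequences in the captured stdout are ANSI color codes; the assertion only needs the literal string "Kubernetes master" to be present. An uncolored check:

kubectl cluster-info | grep -q 'Kubernetes master' && echo OK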
May 15 14:26:01.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:26:01.202: INFO: namespace kubectl-1825 deletion completed in 6.103289903s • [SLOW TEST:8.855 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:26:01.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-b36f7b4f-f78b-4527-8ddd-5bdd000f6ec1 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:26:07.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2129" for this suite. 
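ConfigMaps carry non-UTF-8 payloads in the binaryData field (base64-encoded in the API object); kubectl create configmap --from-file routes binary files there automatically. A sketch:

printf '\xff\xfe\xfd' > blob.bin                    # deliberately not valid UTF-8
kubectl create configmap binary-demo --from-file=blob.bin
kubectl get configmap binary-demo -o jsonpath='{.binaryData.blob\.bin}' | base64 -d | od -An -tx1
# expect: ff fe fd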
May 15 14:26:29.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:26:29.480: INFO: namespace configmap-2129 deletion completed in 22.146349579s • [SLOW TEST:28.278 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:26:29.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 15 14:26:33.617: INFO: Pod pod-hostip-09cdec78-9e14-41ae-9db5-4cabb8b9785c has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:26:33.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9393" for this suite. May 15 14:26:55.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:26:55.710: INFO: namespace pods-9393 deletion completed in 22.089312883s • [SLOW TEST:26.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:26:55.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-77jp STEP: Creating a pod to test atomic-volume-subpath May 15 14:26:55.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-77jp" in namespace "subpath-7707" to be "success or failure" May 15 14:26:55.802: INFO: 
Pod "pod-subpath-test-configmap-77jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235834ms May 15 14:26:57.806: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008017191s May 15 14:26:59.811: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 4.012556482s May 15 14:27:01.815: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 6.017027632s May 15 14:27:03.819: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 8.020850491s May 15 14:27:05.823: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 10.024981181s May 15 14:27:07.828: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 12.029625945s May 15 14:27:09.833: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 14.034368315s May 15 14:27:11.837: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 16.038641214s May 15 14:27:13.842: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 18.043383383s May 15 14:27:15.846: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 20.047305266s May 15 14:27:17.849: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Running", Reason="", readiness=true. Elapsed: 22.050300936s May 15 14:27:19.871: INFO: Pod "pod-subpath-test-configmap-77jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072328383s STEP: Saw pod success May 15 14:27:19.871: INFO: Pod "pod-subpath-test-configmap-77jp" satisfied condition "success or failure" May 15 14:27:19.873: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-77jp container test-container-subpath-configmap-77jp: STEP: delete the pod May 15 14:27:19.911: INFO: Waiting for pod pod-subpath-test-configmap-77jp to disappear May 15 14:27:19.932: INFO: Pod pod-subpath-test-configmap-77jp no longer exists STEP: Deleting pod pod-subpath-test-configmap-77jp May 15 14:27:19.932: INFO: Deleting pod "pod-subpath-test-configmap-77jp" in namespace "subpath-7707" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:27:19.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7707" for this suite. 
May 15 14:27:25.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:27:26.032: INFO: namespace subpath-7707 deletion completed in 6.092544857s • [SLOW TEST:30.322 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:27:26.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 15 14:27:34.167: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 14:27:34.171: INFO: Pod pod-with-poststart-http-hook still exists May 15 14:27:36.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 14:27:36.176: INFO: Pod pod-with-poststart-http-hook still exists May 15 14:27:38.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 14:27:38.176: INFO: Pod pod-with-poststart-http-hook still exists May 15 14:27:40.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 14:27:40.176: INFO: Pod pod-with-poststart-http-hook still exists May 15 14:27:42.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 14:27:42.175: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:27:42.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1255" for this suite. 
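A postStart httpGet hook fires immediately after the container starts; the harness points the hook at a separately created handler pod and then checks that the request arrived. A sketch of the hook spec (the host IP is a placeholder for wherever the handler actually listens):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          port: 8080
          host: 10.244.1.10   # hypothetical handler pod IP; hook failure kills the container
EOF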
May 15 14:28:04.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:28:04.262: INFO: namespace container-lifecycle-hook-1255 deletion completed in 22.083250521s • [SLOW TEST:38.229 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:28:04.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 14:28:04.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112" in namespace "projected-6664" to be "success or failure" May 15 14:28:04.326: INFO: Pod "downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861539ms May 15 14:28:06.330: INFO: Pod "downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008411165s May 15 14:28:08.334: INFO: Pod "downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012652907s STEP: Saw pod success May 15 14:28:08.335: INFO: Pod "downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112" satisfied condition "success or failure" May 15 14:28:08.338: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112 container client-container: STEP: delete the pod May 15 14:28:08.351: INFO: Waiting for pod downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112 to disappear May 15 14:28:08.366: INFO: Pod downwardapi-volume-e0c9f7ab-d1f4-41a8-abb0-0f0bacd0c112 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:28:08.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6664" for this suite. 
May 15 14:28:14.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:28:14.458: INFO: namespace projected-6664 deletion completed in 6.0889817s • [SLOW TEST:10.196 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:28:14.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:28:22.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-942" for this suite. 
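A container whose command exits non-zero ends up in a terminated state with reason Error, which is the field this test reads back. A reproduction sketch (names illustrative; the harness additionally exercises restart back-off):

kubectl run always-fails --generator=run-pod/v1 --restart=Never --image=busybox -- false
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expect: Error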
May 15 14:28:28.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:28:28.694: INFO: namespace kubelet-test-942 deletion completed in 6.141045107s • [SLOW TEST:14.236 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:28:28.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 15 14:28:28.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77" in namespace "projected-2169" to be "success or failure" May 15 14:28:28.757: INFO: Pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.604033ms May 15 14:28:30.762: INFO: Pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009513055s May 15 14:28:32.766: INFO: Pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77": Phase="Running", Reason="", readiness=true. Elapsed: 4.013099089s May 15 14:28:34.769: INFO: Pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016712741s STEP: Saw pod success May 15 14:28:34.769: INFO: Pod "downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77" satisfied condition "success or failure" May 15 14:28:34.771: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77 container client-container: STEP: delete the pod May 15 14:28:34.813: INFO: Waiting for pod downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77 to disappear May 15 14:28:34.937: INFO: Pod downwardapi-volume-31e7f173-71d2-48f0-b816-b9360f2b5c77 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:28:34.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2169" for this suite. 
May 15 14:28:40.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:28:41.063: INFO: namespace projected-2169 deletion completed in 6.122315016s • [SLOW TEST:12.368 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:28:41.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-bbj6 STEP: Creating a pod to test atomic-volume-subpath May 15 14:28:41.163: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bbj6" in namespace "subpath-7648" to be "success or failure" May 15 14:28:41.167: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292379ms May 15 14:28:43.171: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008740943s May 15 14:28:45.175: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012724205s May 15 14:28:47.179: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 6.015970375s May 15 14:28:49.183: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 8.02033162s May 15 14:28:51.188: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 10.024968542s May 15 14:28:53.192: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 12.029362425s May 15 14:28:55.196: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 14.033784082s May 15 14:28:57.200: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 16.0370869s May 15 14:28:59.217: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 18.053986088s May 15 14:29:01.224: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 20.061216013s May 15 14:29:03.228: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Running", Reason="", readiness=true. Elapsed: 22.064948136s May 15 14:29:05.231: INFO: Pod "pod-subpath-test-secret-bbj6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068824187s STEP: Saw pod success May 15 14:29:05.231: INFO: Pod "pod-subpath-test-secret-bbj6" satisfied condition "success or failure" May 15 14:29:05.234: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-bbj6 container test-container-subpath-secret-bbj6: STEP: delete the pod May 15 14:29:05.343: INFO: Waiting for pod pod-subpath-test-secret-bbj6 to disappear May 15 14:29:05.360: INFO: Pod pod-subpath-test-secret-bbj6 no longer exists STEP: Deleting pod pod-subpath-test-secret-bbj6 May 15 14:29:05.360: INFO: Deleting pod "pod-subpath-test-secret-bbj6" in namespace "subpath-7648" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:29:05.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7648" for this suite. May 15 14:29:11.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:29:11.439: INFO: namespace subpath-7648 deletion completed in 6.073595453s • [SLOW TEST:30.375 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:29:11.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f7c72091-0b72-436f-ae2f-8d60e0ae5757 STEP: Creating a pod to test consume configMaps May 15 14:29:11.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0" in namespace "configmap-4336" to be "success or failure" May 15 14:29:11.940: INFO: Pod "pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 216.906528ms May 15 14:29:13.944: INFO: Pod "pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220623685s May 15 14:29:15.948: INFO: Pod "pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.224643441s STEP: Saw pod success May 15 14:29:15.948: INFO: Pod "pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0" satisfied condition "success or failure" May 15 14:29:15.951: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0 container configmap-volume-test: STEP: delete the pod May 15 14:29:15.972: INFO: Waiting for pod pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0 to disappear May 15 14:29:16.044: INFO: Pod pod-configmaps-32377a93-6fda-49a6-bba7-5e418591d9c0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:29:16.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4336" for this suite. May 15 14:29:22.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:29:22.180: INFO: namespace configmap-4336 deletion completed in 6.131395815s • [SLOW TEST:10.741 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:29:22.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 15 14:29:22.247: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 14:29:22.296: INFO: Waiting for terminating namespaces to be deleted... 
May 15 14:29:22.299: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
May 15 14:29:22.302: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
May 15 14:29:22.302: INFO: Container kube-proxy ready: true, restart count 0
May 15 14:29:22.302: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
May 15 14:29:22.302: INFO: Container kindnet-cni ready: true, restart count 0
May 15 14:29:22.302: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test
May 15 14:29:22.306: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded)
May 15 14:29:22.306: INFO: Container coredns ready: true, restart count 0
May 15 14:29:22.306: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded)
May 15 14:29:22.306: INFO: Container coredns ready: true, restart count 0
May 15 14:29:22.306: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded)
May 15 14:29:22.306: INFO: Container kube-proxy ready: true, restart count 0
May 15 14:29:22.306: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded)
May 15 14:29:22.306: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f39ff569a8b7b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:29:23.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2971" for this suite.
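The predicate this spec exercises is easy to reproduce: a pod whose nodeSelector names a label no node carries must stay Pending, and the only signal is the FailedScheduling event quoted above. A minimal sketch, assuming a v1.15-era client-go (method signatures without context.Context); the pod name, label, image, and namespace are illustrative, not the suite's actual values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this label, so the pod
			// must stay Pending with a FailedScheduling event.
			NodeSelector: map[string]string{"env": "does-not-exist"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	fmt.Println("created restricted-pod; expect event: 0/3 nodes are available")
}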
May 15 14:29:29.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:29:29.412: INFO: namespace sched-pred-2971 deletion completed in 6.087638647s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.232 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:29:29.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 15 14:29:29.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:29.504: INFO: Number of nodes with available pods: 0
May 15 14:29:29.505: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:29:30.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:30.515: INFO: Number of nodes with available pods: 0
May 15 14:29:30.515: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:29:31.509: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:31.512: INFO: Number of nodes with available pods: 0
May 15 14:29:31.512: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:29:32.510: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:32.513: INFO: Number of nodes with available pods: 0
May 15 14:29:32.513: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:29:33.541: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:33.548: INFO: Number of nodes with available pods: 0
May 15 14:29:33.548: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:29:34.508: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:34.511: INFO: Number of nodes with available pods: 2
May 15 14:29:34.511: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 15 14:29:34.563: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:29:34.578: INFO: Number of nodes with available pods: 2
May 15 14:29:34.578: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5510, will wait for the garbage collector to delete the pods
May 15 14:29:35.699: INFO: Deleting DaemonSet.extensions daemon-set took: 5.195709ms
May 15 14:29:35.999: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249589ms
May 15 14:29:42.203: INFO: Number of nodes with available pods: 0
May 15 14:29:42.203: INFO: Number of running nodes: 0, number of available pods: 0
May 15 14:29:42.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5510/daemonsets","resourceVersion":"11051618"},"items":null}
May 15 14:29:42.208: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5510/pods","resourceVersion":"11051618"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:29:42.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5510" for this suite.
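The DaemonSet under test is nothing more than a selector plus a pod template; the point of the spec is that after a daemon pod's status.phase is forced to Failed, the controller deletes it and creates a replacement. A sketch of the object being created, under the same v1.15-era client-go assumptions as the previous example (names, image, and namespace illustrative):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	// One daemon pod lands on each schedulable node; the control-plane node
	// is skipped because this template carries no toleration for its
	// node-role.kubernetes.io/master:NoSchedule taint -- hence the repeated
	// "can't tolerate node iruya-control-plane" lines in the log above.
	if _, err := cs.AppsV1().DaemonSets("default").Create(ds); err != nil {
		panic(err)
	}
}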
May 15 14:29:48.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:29:48.290: INFO: namespace daemonsets-5510 deletion completed in 6.071975971s
• [SLOW TEST:18.878 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:29:48.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:30:48.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6389" for this suite.
May 15 14:31:10.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:31:10.474: INFO: namespace container-probe-6389 deletion completed in 22.089462335s
• [SLOW TEST:82.183 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:31:10.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 14:31:10.579: INFO: Creating ReplicaSet my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264
May 15 14:31:10.602: INFO: Pod name my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264: Found 0 pods out of 1
May 15 14:31:15.606: INFO: Pod name my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264: Found 1 pods out of 1
May 15 14:31:15.606: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264" is running
May 15 14:31:15.609: INFO: Pod "my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264-n26mm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 14:31:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 14:31:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 14:31:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 14:31:10 +0000 UTC Reason: Message:}])
May 15 14:31:15.609: INFO: Trying to dial the pod
May 15 14:31:20.621: INFO: Controller my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264: Got expected result from replica 1 [my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264-n26mm]: "my-hostname-basic-7a0dd571-3106-48a7-bf50-291efb044264-n26mm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:31:20.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1083" for this suite.
May 15 14:31:26.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:31:26.711: INFO: namespace replicaset-1083 deletion completed in 6.086365936s
• [SLOW TEST:16.237 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:31:26.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 15 14:31:26.774: INFO: Waiting up to 5m0s for pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e" in namespace "downward-api-3434" to be "success or failure"
May 15 14:31:26.778: INFO: Pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.824763ms
May 15 14:31:28.781: INFO: Pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007358939s
May 15 14:31:30.819: INFO: Pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e": Phase="Running", Reason="", readiness=true. Elapsed: 4.044573021s
May 15 14:31:32.823: INFO: Pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04931717s
STEP: Saw pod success
May 15 14:31:32.823: INFO: Pod "downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e" satisfied condition "success or failure"
May 15 14:31:32.827: INFO: Trying to get logs from node iruya-worker pod downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e container dapi-container:
STEP: delete the pod
May 15 14:31:32.858: INFO: Waiting for pod downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e to disappear
May 15 14:31:32.868: INFO: Pod downward-api-25f2aa73-d169-4f96-91f4-b3ef88705a7e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:31:32.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3434" for this suite.
May 15 14:31:38.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:31:38.957: INFO: namespace downward-api-3434 deletion completed in 6.085473667s
• [SLOW TEST:12.245 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:31:38.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:31:45.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5765" for this suite.
May 15 14:31:51.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:31:51.428: INFO: namespace namespaces-5765 deletion completed in 6.081287368s
STEP: Destroying namespace "nsdeletetest-5157" for this suite.
May 15 14:31:51.430: INFO: Namespace nsdeletetest-5157 was already deleted
STEP: Destroying namespace "nsdeletetest-6244" for this suite.
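What the namespace spec just verified is the cascade: deleting a namespace garbage-collects the services inside it, and a recreated namespace of the same name starts out empty. A sketch under the same v1.15-era client-go assumptions, with illustrative names and the wait-for-full-deletion poll deliberately elided:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "nsdeletetest-demo" // illustrative name
	if _, err := cs.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}); err != nil {
		panic(err)
	}
	// A service created in the namespace...
	_, err := cs.CoreV1().Services(ns).Create(&corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	})
	if err != nil {
		panic(err)
	}
	// ...is removed together with the namespace.
	if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// After the namespace has fully disappeared and been recreated
	// (poll omitted here), the service list must come back empty.
	svcs, err := cs.CoreV1().Services(ns).List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining: %d\n", len(svcs.Items))
}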
May 15 14:31:57.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:31:57.529: INFO: namespace nsdeletetest-6244 deletion completed in 6.098588633s
• [SLOW TEST:18.572 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:31:57.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 15 14:31:57.622: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1586,SelfLink:/api/v1/namespaces/watch-1586/configmaps/e2e-watch-test-resource-version,UID:c7eba6a8-8eab-4be8-965a-d6e142bed6e4,ResourceVersion:11052028,Generation:0,CreationTimestamp:2020-05-15 14:31:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 15 14:31:57.622: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1586,SelfLink:/api/v1/namespaces/watch-1586/configmaps/e2e-watch-test-resource-version,UID:c7eba6a8-8eab-4be8-965a-d6e142bed6e4,ResourceVersion:11052029,Generation:0,CreationTimestamp:2020-05-15 14:31:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:31:57.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1586" for this suite.
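The resource-version arithmetic is the interesting part here: the watch is opened with the ResourceVersion returned by the first update, so the server replays only what happened afterwards -- one MODIFIED carrying mutation: 2 and one DELETED, exactly the two notifications logged above. A compact sketch under the same client-go assumptions, names illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "default" // illustrative
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "e2e-watch-demo"}}
	cm, err := cs.CoreV1().ConfigMaps(ns).Create(cm)
	if err != nil {
		panic(err)
	}
	cm.Data = map[string]string{"mutation": "1"}
	cm, err = cs.CoreV1().ConfigMaps(ns).Update(cm)
	if err != nil {
		panic(err)
	}
	rv := cm.ResourceVersion // version after the first update
	cm.Data["mutation"] = "2"
	if cm, err = cs.CoreV1().ConfigMaps(ns).Update(cm); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().ConfigMaps(ns).Delete(cm.Name, nil); err != nil {
		panic(err)
	}
	// Watching from rv replays everything after the first update:
	// one MODIFIED (mutation: 2) and one DELETED event.
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		ResourceVersion: rv,
		FieldSelector:   "metadata.name=e2e-watch-demo",
	})
	if err != nil {
		panic(err)
	}
	seen := 0
	for ev := range w.ResultChan() {
		fmt.Println("got event:", ev.Type)
		if seen++; seen == 2 {
			w.Stop()
		}
	}
}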
May 15 14:32:03.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:32:03.758: INFO: namespace watch-1586 deletion completed in 6.116573617s
• [SLOW TEST:6.229 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:32:03.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 15 14:32:03.826: INFO: Waiting up to 5m0s for pod "pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9" in namespace "emptydir-5046" to be "success or failure"
May 15 14:32:03.829: INFO: Pod "pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533468ms
May 15 14:32:05.833: INFO: Pod "pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006462161s
May 15 14:32:08.060: INFO: Pod "pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.233833755s
STEP: Saw pod success
May 15 14:32:08.060: INFO: Pod "pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9" satisfied condition "success or failure"
May 15 14:32:08.132: INFO: Trying to get logs from node iruya-worker pod pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9 container test-container:
STEP: delete the pod
May 15 14:32:08.351: INFO: Waiting for pod pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9 to disappear
May 15 14:32:08.422: INFO: Pod pod-d8ba28c9-17dc-4703-b60b-bf4fd5cae0b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:32:08.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5046" for this suite.
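The (non-root,0777,tmpfs) triplet in the spec name decodes to: run as a non-root UID, expect 0777 file modes, and back the emptyDir with memory. The suite does its verification with a dedicated mounttest image; the sketch below substitutes an illustrative busybox one-liner and is only an approximation of the pod shape, under the same client-go assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	uid := int64(1001) // non-root, illustrative
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c",
					"echo ok > /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a %u' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}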
May 15 14:32:14.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:32:14.594: INFO: namespace emptydir-5046 deletion completed in 6.168847082s
• [SLOW TEST:10.836 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:32:14.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:32:18.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-582" for this suite.
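Here the container runs with readOnlyRootFilesystem: true, so any write to the root filesystem must fail while the pod itself keeps running. A sketch of that pod shape, same assumptions, command and image illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29", // illustrative
				// The redirect must fail: the root filesystem is read-only.
				Command:         []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}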
May 15 14:33:02.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:33:02.834: INFO: namespace kubelet-test-582 deletion completed in 44.126797806s
• [SLOW TEST:48.240 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:33:02.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-a89c1e55-414c-4961-8486-46f7f5b28989
STEP: Creating a pod to test consume secrets
May 15 14:33:02.925: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0" in namespace "projected-2537" to be "success or failure"
May 15 14:33:02.945: INFO: Pod "pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.853953ms
May 15 14:33:05.009: INFO: Pod "pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083979822s
May 15 14:33:07.013: INFO: Pod "pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087909627s
STEP: Saw pod success
May 15 14:33:07.013: INFO: Pod "pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0" satisfied condition "success or failure"
May 15 14:33:07.016: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0 container secret-volume-test:
STEP: delete the pod
May 15 14:33:07.049: INFO: Waiting for pod pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0 to disappear
May 15 14:33:07.054: INFO: Pod pod-projected-secrets-ab462268-0b6a-4fdc-bbc9-d2cae743a8f0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:33:07.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2537" for this suite.
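This spec mounts one secret through two projected volumes in the same pod and reads it back from both paths. A sketch with illustrative names and image, same client-go assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "default" // illustrative
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(sec); err != nil {
		panic(err)
	}
	projected := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: sec.Name},
				},
			}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The same secret is projected at two mount points.
			Volumes: []corev1.Volume{
				{Name: "vol-1", VolumeSource: projected},
				{Name: "vol-2", VolumeSource: projected},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/projected-1", ReadOnly: true},
					{Name: "vol-2", MountPath: "/etc/projected-2", ReadOnly: true},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
}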
May 15 14:33:13.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:33:13.147: INFO: namespace projected-2537 deletion completed in 6.090666921s
• [SLOW TEST:10.313 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:33:13.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-af52046b-f7ca-4105-91d1-f678721a3cd5
STEP: Creating a pod to test consume configMaps
May 15 14:33:13.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0" in namespace "configmap-7023" to be "success or failure"
May 15 14:33:13.246: INFO: Pod "pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309602ms
May 15 14:33:15.250: INFO: Pod "pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00688974s
May 15 14:33:17.254: INFO: Pod "pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010968943s
STEP: Saw pod success
May 15 14:33:17.254: INFO: Pod "pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0" satisfied condition "success or failure"
May 15 14:33:17.257: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0 container configmap-volume-test:
STEP: delete the pod
May 15 14:33:17.330: INFO: Waiting for pod pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0 to disappear
May 15 14:33:17.390: INFO: Pod pod-configmaps-e5b9650f-5a53-4012-b4dc-5f5a5c76dee0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:33:17.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7023" for this suite.
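Same pattern as the previous spec, but with a plain configMap volume source mounted twice in one container; a sketch under the same assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "default" // illustrative
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-multivol-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		panic(err)
	}
	src := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-multivol-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The same configMap is mounted at two paths in one container.
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: src},
				{Name: "configmap-volume-2", VolumeSource: src},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
}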
May 15 14:33:23.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:33:23.628: INFO: namespace configmap-7023 deletion completed in 6.233459477s
• [SLOW TEST:10.480 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:33:23.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 15 14:33:23.741: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 15 14:33:23.750: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:23.755: INFO: Number of nodes with available pods: 0
May 15 14:33:23.755: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:24.760: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:24.765: INFO: Number of nodes with available pods: 0
May 15 14:33:24.765: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:25.760: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:25.764: INFO: Number of nodes with available pods: 0
May 15 14:33:25.764: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:26.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:26.859: INFO: Number of nodes with available pods: 0
May 15 14:33:26.859: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:27.766: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:27.768: INFO: Number of nodes with available pods: 0
May 15 14:33:27.768: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:28.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:28.794: INFO: Number of nodes with available pods: 2
May 15 14:33:28.794: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 15 14:33:28.822: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:28.822: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:28.830: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:29.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:29.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:29.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:30.844: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:30.844: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:30.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:31.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:31.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:31.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:32.833: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:32.833: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:32.833: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:32.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:33.834: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:33.834: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:33.834: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:33.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:34.910: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:34.910: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:34.910: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:34.914: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:35.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:35.835: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:35.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:35.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:36.834: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:36.835: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:36.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:36.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:37.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:37.835: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:37.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:37.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:38.838: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:38.838: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:38.838: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:38.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:39.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:39.835: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:39.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:39.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:40.835: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:40.835: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:40.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:40.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:41.836: INFO: Wrong image for pod: daemon-set-b2v6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:41.836: INFO: Pod daemon-set-b2v6q is not available
May 15 14:33:41.836: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:41.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:42.834: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:42.835: INFO: Pod daemon-set-pzn5h is not available
May 15 14:33:42.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:43.836: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:43.836: INFO: Pod daemon-set-pzn5h is not available
May 15 14:33:43.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:44.922: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:44.922: INFO: Pod daemon-set-pzn5h is not available
May 15 14:33:44.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:45.834: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:45.834: INFO: Pod daemon-set-pzn5h is not available
May 15 14:33:45.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:46.834: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:46.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:47.835: INFO: Wrong image for pod: daemon-set-l4kpr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 15 14:33:47.835: INFO: Pod daemon-set-l4kpr is not available
May 15 14:33:47.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:48.835: INFO: Pod daemon-set-xjt28 is not available
May 15 14:33:48.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 15 14:33:48.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:48.845: INFO: Number of nodes with available pods: 1
May 15 14:33:48.845: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:49.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:49.879: INFO: Number of nodes with available pods: 1
May 15 14:33:49.879: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:50.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:50.871: INFO: Number of nodes with available pods: 1
May 15 14:33:50.871: INFO: Node iruya-worker is running more than one daemon pod
May 15 14:33:51.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 15 14:33:51.854: INFO: Number of nodes with available pods: 2
May 15 14:33:51.854: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8259, will wait for the garbage collector to delete the pods
May 15 14:33:51.928: INFO: Deleting DaemonSet.extensions daemon-set took: 7.007775ms
May 15 14:33:52.228: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.26625ms
May 15 14:34:02.232: INFO: Number of nodes with available pods: 0
May 15 14:34:02.232: INFO: Number of running nodes: 0, number of available pods: 0
May 15 14:34:02.234: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8259/daemonsets","resourceVersion":"11052450"},"items":null}
May 15 14:34:02.236: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8259/pods","resourceVersion":"11052450"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:34:02.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8259" for this suite.
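The long "Wrong image" sequence above is the rolling update doing its job: with the RollingUpdate strategy the controller replaces daemon pods one node at a time (bounded by maxUnavailable, which defaults to 1), which is why one pod flips to "not available" while the other still reports the old nginx image. The update itself is just a pod-template change; a sketch under the same client-go assumptions (namespace illustrative, images taken from the log):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "default" // illustrative
	ds, err := cs.AppsV1().DaemonSets(ns).Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// With RollingUpdate, changing the template makes the controller delete
	// and recreate daemon pods node by node instead of all at once.
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
	}
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
	if _, err := cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		panic(err)
	}
}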
May 15 14:34:08.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:34:08.331: INFO: namespace daemonsets-8259 deletion completed in 6.082529745s
• [SLOW TEST:44.701 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:34:08.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 15 14:34:08.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1" in namespace "projected-9327" to be "success or failure"
May 15 14:34:08.400: INFO: Pod "downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421265ms
May 15 14:34:10.403: INFO: Pod "downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006535809s
May 15 14:34:12.407: INFO: Pod "downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010679259s
STEP: Saw pod success
May 15 14:34:12.407: INFO: Pod "downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1" satisfied condition "success or failure"
May 15 14:34:12.410: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1 container client-container:
STEP: delete the pod
May 15 14:34:12.431: INFO: Waiting for pod downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1 to disappear
May 15 14:34:12.435: INFO: Pod downwardapi-volume-c4a6d648-a160-434b-98e0-d4db772a3ac1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:34:12.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9327" for this suite.
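The "set mode on item file" assertion comes down to the per-item Mode field of a projected downward API volume. A sketch (mode value, names, and image illustrative; same client-go assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	mode := int32(0400) // per-item file mode the test asserts on
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // overrides the volume default
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29", // illustrative
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}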
May 15 14:34:18.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:34:18.524: INFO: namespace projected-9327 deletion completed in 6.085519857s
• [SLOW TEST:10.193 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:34:18.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c3daafbd-6c95-47ee-bb81-a7832b3eb827
STEP: Creating a pod to test consume configMaps
May 15 14:34:18.612: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6" in namespace "projected-7585" to be "success or failure"
May 15 14:34:18.616: INFO: Pod "pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.859013ms
May 15 14:34:20.619: INFO: Pod "pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007358264s
May 15 14:34:22.622: INFO: Pod "pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010419805s
STEP: Saw pod success
May 15 14:34:22.622: INFO: Pod "pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6" satisfied condition "success or failure"
May 15 14:34:22.624: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6 container projected-configmap-volume-test:
STEP: delete the pod
May 15 14:34:22.640: INFO: Waiting for pod pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6 to disappear
May 15 14:34:22.645: INFO: Pod pod-projected-configmaps-aef85654-ef01-4703-8bc0-9c9e13a811f6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:34:22.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7585" for this suite.
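"With mappings as non-root" means the configMap key is remapped to a different path via items, and the pod runs under a non-root UID. A sketch, same assumptions, names illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	ns := "default" // illustrative
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-map-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		panic(err)
	}
	uid := int64(1000) // non-root, illustrative
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
								// Remap the key to a nested path inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-configmap-volume", MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		panic(err)
	}
}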
May 15 14:34:28.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:34:28.781: INFO: namespace projected-7585 deletion completed in 6.134337583s
• [SLOW TEST:10.257 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:34:28.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 15 14:34:32.912: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:34:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5874" for this suite.
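The odd-looking assertion "Expected: &{} to match Container's Termination Message: --" is checking that the termination message is empty: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the message only when the container fails, and this container exits 0. A sketch of the pod and the status read-back, same assumptions, names illustrative, readiness polling elided:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func mustClient() *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	cs := mustClient()
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "echo some log output; exit 0"},
				// Logs become the termination message only on failure;
				// on success the message stays empty.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	// After the pod succeeds (poll omitted), read the terminated state back.
	p, err := cs.CoreV1().Pods("default").Get(pod.Name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sts := p.Status.ContainerStatuses; len(sts) > 0 && sts[0].State.Terminated != nil {
		fmt.Printf("termination message: %q\n", sts[0].State.Terminated.Message) // expect ""
	}
}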
May 15 14:34:39.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:34:39.134: INFO: namespace container-runtime-5874 deletion completed in 6.191958474s • [SLOW TEST:10.352 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:34:39.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4dab50a2-f00e-492b-94fc-9fc120a528a0 STEP: Creating a pod to test consume secrets May 15 14:34:39.206: INFO: Waiting up to 5m0s for pod "pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84" in namespace "secrets-2312" to be "success or failure" May 15 14:34:39.210: INFO: Pod "pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84": Phase="Pending", Reason="", readiness=false. Elapsed: 3.730402ms May 15 14:34:41.243: INFO: Pod "pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037078328s May 15 14:34:43.513: INFO: Pod "pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306867364s STEP: Saw pod success May 15 14:34:43.513: INFO: Pod "pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84" satisfied condition "success or failure" May 15 14:34:43.516: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84 container secret-env-test: STEP: delete the pod May 15 14:34:44.152: INFO: Waiting for pod pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84 to disappear May 15 14:34:44.169: INFO: Pod pod-secrets-3d5d1d59-898b-4d00-89b7-29f042316f84 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:34:44.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2312" for this suite. 
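The secret-env steps above map one Secret key into a container environment variable and assert the pod prints it. The same shape by hand (names hypothetical):

kubectl create secret generic demo-secret --from-literal=secret-key=secret-value

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: secret-key
EOF

kubectl logs secret-env-demo   # once Succeeded, should print: secret-value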
May 15 14:34:50.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:34:50.308: INFO: namespace secrets-2312 deletion completed in 6.132371269s • [SLOW TEST:11.173 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:34:50.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 15 14:34:50.441: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9417,SelfLink:/api/v1/namespaces/watch-9417/configmaps/e2e-watch-test-watch-closed,UID:9f8b764e-be84-49e1-9b79-0a5c36e0e52a,ResourceVersion:11052674,Generation:0,CreationTimestamp:2020-05-15 14:34:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 14:34:50.441: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9417,SelfLink:/api/v1/namespaces/watch-9417/configmaps/e2e-watch-test-watch-closed,UID:9f8b764e-be84-49e1-9b79-0a5c36e0e52a,ResourceVersion:11052675,Generation:0,CreationTimestamp:2020-05-15 14:34:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 15 14:34:50.491: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9417,SelfLink:/api/v1/namespaces/watch-9417/configmaps/e2e-watch-test-watch-closed,UID:9f8b764e-be84-49e1-9b79-0a5c36e0e52a,ResourceVersion:11052676,Generation:0,CreationTimestamp:2020-05-15 14:34:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 14:34:50.492: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9417,SelfLink:/api/v1/namespaces/watch-9417/configmaps/e2e-watch-test-watch-closed,UID:9f8b764e-be84-49e1-9b79-0a5c36e0e52a,ResourceVersion:11052677,Generation:0,CreationTimestamp:2020-05-15 14:34:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:34:50.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9417" for this suite. May 15 14:34:56.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:34:56.592: INFO: namespace watch-9417 deletion completed in 6.080532391s • [SLOW TEST:6.284 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:34:56.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 15 14:35:01.200: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f30d9efb-d77d-48b7-8ff8-63536fad5d96" May 15 14:35:01.200: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f30d9efb-d77d-48b7-8ff8-63536fad5d96" in namespace "pods-7224" to be "terminated due to deadline exceeded" May 
15 14:35:01.265: INFO: Pod "pod-update-activedeadlineseconds-f30d9efb-d77d-48b7-8ff8-63536fad5d96": Phase="Running", Reason="", readiness=true. Elapsed: 65.547284ms May 15 14:35:03.269: INFO: Pod "pod-update-activedeadlineseconds-f30d9efb-d77d-48b7-8ff8-63536fad5d96": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.069435024s May 15 14:35:03.269: INFO: Pod "pod-update-activedeadlineseconds-f30d9efb-d77d-48b7-8ff8-63536fad5d96" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:35:03.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7224" for this suite. May 15 14:35:09.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:35:09.419: INFO: namespace pods-7224 deletion completed in 6.145737838s • [SLOW TEST:12.827 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:35:09.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-94d5afee-d35b-453d-8748-1efcd59b1f83 STEP: Creating a pod to test consume configMaps May 15 14:35:09.524: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30" in namespace "projected-3328" to be "success or failure" May 15 14:35:09.528: INFO: Pod "pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.684946ms May 15 14:35:11.532: INFO: Pod "pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00767155s May 15 14:35:13.536: INFO: Pod "pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011629311s STEP: Saw pod success May 15 14:35:13.536: INFO: Pod "pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30" satisfied condition "success or failure" May 15 14:35:13.539: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30 container projected-configmap-volume-test: STEP: delete the pod May 15 14:35:13.689: INFO: Waiting for pod pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30 to disappear May 15 14:35:13.783: INFO: Pod pod-projected-configmaps-6789859f-195e-4df5-9e4b-8fff83ea4d30 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:35:13.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3328" for this suite. May 15 14:35:19.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:35:19.991: INFO: namespace projected-3328 deletion completed in 6.204668685s • [SLOW TEST:10.572 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:35:19.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-98d4ed4e-e4d6-4746-8ff7-da4ebf9e53bd [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:35:20.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9765" for this suite. 
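The empty-secret-key spec above never gets as far as running a pod: the API server's validation rejects a Secret whose data map uses "" as a key, which is why the only step logged is the creation attempt. A sketch of the rejection (name hypothetical; the exact error wording varies by server version):

# An empty key in .data is invalid, so apply should fail server-side:
cat <<'EOF' | kubectl apply -f - || echo "rejected, as the spec expects"
apiVersion: v1
kind: Secret
metadata:
  name: emptykey-demo
data:
  "": dmFsdWU=          # base64("value") filed under an empty key
EOF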
May 15 14:35:26.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:35:26.128: INFO: namespace secrets-9765 deletion completed in 6.074984109s • [SLOW TEST:6.137 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:35:26.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c5795268-0c93-4058-94c7-0662ee61bb59 STEP: Creating a pod to test consume configMaps May 15 14:35:26.502: INFO: Waiting up to 5m0s for pod "pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892" in namespace "configmap-7699" to be "success or failure" May 15 14:35:26.531: INFO: Pod "pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892": Phase="Pending", Reason="", readiness=false. Elapsed: 29.146433ms May 15 14:35:28.682: INFO: Pod "pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180614089s May 15 14:35:30.687: INFO: Pod "pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184931574s STEP: Saw pod success May 15 14:35:30.687: INFO: Pod "pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892" satisfied condition "success or failure" May 15 14:35:30.690: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892 container configmap-volume-test: STEP: delete the pod May 15 14:35:30.743: INFO: Waiting for pod pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892 to disappear May 15 14:35:30.752: INFO: Pod pod-configmaps-7604c640-fdf4-4844-91bb-2a0b42624892 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:35:30.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7699" for this suite. 
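The plain configMap-volume case above differs from the projected variants only in using a configMap volume source directly, with no item mapping and default file modes. By hand (names hypothetical):

kubectl create configmap demo-vol-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/data-1"]   # each key appears as a file named after it
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: demo-vol-cm
EOF

kubectl logs cm-vol-demo   # once Succeeded, should print: value-1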
May 15 14:35:36.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:35:37.027: INFO: namespace configmap-7699 deletion completed in 6.272005421s • [SLOW TEST:10.898 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:35:37.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:35:37.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-379" for this suite. 
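The kubelet spec above logs no intermediate steps because, per its name, the whole point is cleanup: a pod whose command always fails (and therefore crash-loops) must still be deletable. A hand-run sketch (name hypothetical):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  containers:
  - name: c
    image: busybox
    command: ["/bin/false"]   # exits 1 on every start -> CrashLoopBackOff
EOF

# The assertion is simply that deletion works despite the crash loop:
kubectl delete pod always-fails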
May 15 14:35:43.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:35:43.263: INFO: namespace kubelet-test-379 deletion completed in 6.084917766s • [SLOW TEST:6.236 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:35:43.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-92334943-4104-45f1-97f4-e9d197e140e4 STEP: Creating secret with name s-test-opt-upd-117a983e-2e4d-45cf-a8b7-192618c96754 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-92334943-4104-45f1-97f4-e9d197e140e4 STEP: Updating secret s-test-opt-upd-117a983e-2e4d-45cf-a8b7-192618c96754 STEP: Creating secret with name s-test-opt-create-309a38cf-84f0-4a23-bbe9-54a94a16334c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:37:20.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-716" for this suite. 
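The optional-updates steps above mount multiple Secrets through one projected volume, then delete one, update one, and create the previously missing one, waiting for the kubelet to re-sync the mounted files (hence the long runtime). A hand-run sketch (names hypothetical; distinct key names keep the projected file paths from colliding):

kubectl create secret generic s-del --from-literal=del-1=del-value
kubectl create secret generic s-upd --from-literal=upd-1=old-value

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    projected:
      sources:
      - secret: {name: s-del, optional: true}     # will be deleted later
      - secret: {name: s-upd, optional: true}     # will be updated later
      - secret: {name: s-create, optional: true}  # does not exist yet; optional lets the pod start
EOF

# Mutate the sources, then watch the mounted files converge on the next kubelet sync:
kubectl delete secret s-del
kubectl patch secret s-upd -p '{"stringData":{"upd-1":"new-value"}}'
kubectl create secret generic s-create --from-literal=create-1=created-value
kubectl exec projected-secret-demo -- sh -c 'ls /etc/sec; cat /etc/sec/upd-1'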
May 15 14:37:42.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:37:42.829: INFO: namespace projected-716 deletion completed in 22.257040451s • [SLOW TEST:119.566 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:37:42.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5375.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.40.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.40.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.40.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.40.69_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5375.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.40.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.40.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.40.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.40.69_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 14:37:51.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.174: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.180: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.235: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.242: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:51.264: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 14:37:56.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.273: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods 
dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.276: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.279: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.299: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:37:56.323: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 14:38:01.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.277: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.297: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the 
server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:01.323: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 14:38:06.269: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.276: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.279: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.314: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.316: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.319: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.322: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod 
dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:06.337: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 14:38:11.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.273: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.277: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.303: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.310: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:11.328: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 
14:38:16.269: INFO: Unable to read wheezy_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.277: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.293: INFO: Unable to read jessie_udp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.296: INFO: Unable to read jessie_tcp@dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.301: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local from pod dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df: the server could not find the requested resource (get pods dns-test-9e5cb05b-c157-47df-9476-a8a5939862df) May 15 14:38:16.316: INFO: Lookups using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df failed for: [wheezy_udp@dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@dns-test-service.dns-5375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_udp@dns-test-service.dns-5375.svc.cluster.local jessie_tcp@dns-test-service.dns-5375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5375.svc.cluster.local] May 15 14:38:21.319: INFO: DNS probes using dns-5375/dns-test-9e5cb05b-c157-47df-9476-a8a5939862df succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:38:22.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5375" for this suite. 
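The probe loops above run dig inside the two probe images, checking A records for the normal service, SRV records for its named http port and for the headless service, plus the probe pod's own A and PTR records, writing an OK marker file per successful lookup; the repeated "could not find the requested resource" lines are the framework re-reading those result files through the API before the probes have produced them, until the final "DNS probes ... succeeded". The same lookups by hand (service names hypothetical; any dig-capable image works where noted):

# A normal ClusterIP service and a headless twin (selector/ports hypothetical):
kubectl create service clusterip dns-demo --tcp=80:80
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-demo-headless
spec:
  clusterIP: None                      # headless: A records resolve to pod IPs
  selector: {app: dns-demo}
  ports: [{name: http, port: 80}]
EOF

# A-record lookup from inside the cluster:
kubectl run --rm -it dns-client --image=busybox --restart=Never -- \
  nslookup dns-demo.default.svc.cluster.local
# The SRV checks need dig (busybox nslookup cannot query SRV), e.g.:
#   dig +short SRV _http._tcp.dns-demo.default.svc.cluster.local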
May 15 14:38:28.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:38:28.283: INFO: namespace dns-5375 deletion completed in 6.104047723s • [SLOW TEST:45.453 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:38:28.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-662a6f00-d8c2-4f6f-b95c-f45b6f7aa264 in namespace container-probe-244 May 15 14:38:32.374: INFO: Started pod test-webserver-662a6f00-d8c2-4f6f-b95c-f45b6f7aa264 in namespace container-probe-244 STEP: checking the pod's current state and verifying that restartCount is present May 15 14:38:32.376: INFO: Initial restart count of pod test-webserver-662a6f00-d8c2-4f6f-b95c-f45b6f7aa264 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:42:33.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-244" for this suite. 
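The probe spec above starts a webserver pod with an httpGet liveness probe and then simply observes it for about four minutes (hence the 251-second runtime) to confirm restartCount never leaves 0. The same shape by hand (image and path are stand-ins; the e2e test probes /healthz on its own test-webserver image):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx                 # stand-in webserver
    livenessProbe:
      httpGet:
        path: /                  # the spec uses /healthz on its own image
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# The assertion, after letting the probe run for a few minutes:
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0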
May 15 14:42:39.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:42:39.705: INFO: namespace container-probe-244 deletion completed in 6.188691311s • [SLOW TEST:251.421 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:42:39.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 15 14:42:39.763: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 15 14:42:40.375: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 15 14:42:42.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 14:42:44.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725150560, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 14:42:47.446: INFO: Waited 625.796663ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:42:47.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7168" for this suite. May 15 14:42:54.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:42:54.080: INFO: namespace aggregator-7168 deletion completed in 6.097427185s • [SLOW TEST:14.374 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:42:54.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 15 14:42:54.160: INFO: Waiting up to 5m0s for pod "downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867" in namespace "downward-api-3044" to be "success or failure" May 15 14:42:54.178: INFO: Pod "downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867": Phase="Pending", Reason="", readiness=false. Elapsed: 18.471895ms May 15 14:42:56.487: INFO: Pod "downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327659752s May 15 14:42:58.491: INFO: Pod "downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.331163959s STEP: Saw pod success May 15 14:42:58.491: INFO: Pod "downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867" satisfied condition "success or failure" May 15 14:42:58.493: INFO: Trying to get logs from node iruya-worker pod downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867 container dapi-container: STEP: delete the pod May 15 14:42:58.678: INFO: Waiting for pod downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867 to disappear May 15 14:42:58.705: INFO: Pod downward-api-eb992cdc-388f-4ebb-9ff3-c18216507867 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:42:58.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3044" for this suite. May 15 14:43:04.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:43:04.852: INFO: namespace downward-api-3044 deletion completed in 6.119133548s • [SLOW TEST:10.771 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:43:04.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8328, will wait for the garbage collector to delete the pods May 15 14:43:11.015: INFO: Deleting Job.batch foo took: 5.493232ms May 15 14:43:11.315: INFO: Terminating Job.batch foo pods took: 300.214584ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:43:44.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8328" for this suite. 
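The Job steps above create parallel pods, confirm active == parallelism, then delete the Job and wait for the garbage collector to collect its pods (the "Deleting Job.batch foo" / "Terminating Job.batch foo pods" lines). By hand (sizes hypothetical; the spec's Job is likewise named foo):

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
EOF

kubectl get pods -l job-name=foo      # should show 2 active pods
# Deleting the Job cascades to its pods via the garbage collector:
kubectl delete job foo
kubectl get pods -l job-name=foo      # drains to empty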
May 15 14:43:50.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 15 14:43:50.754: INFO: namespace job-8328 deletion completed in 6.129424616s • [SLOW TEST:45.902 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 15 14:43:50.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 15 14:43:50.811: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 15 14:44:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-863" for this suite. 
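The submit/remove steps above set up a watch, create a pod, delete it gracefully, and assert the watch reports the kubelet's termination notice and then the deletion. By hand (name hypothetical; run the watch in a second shell):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: submit-remove-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
EOF

# Second shell:  kubectl get pods -w
kubectl delete pod submit-remove-demo --grace-period=30
# The watch shows MODIFIED events while the kubelet honors the grace period,
# then a final DELETED event once the pod object is removed.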
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:43:50.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 15 14:43:50.811: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:44:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-863" for this suite.
May 15 14:44:08.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:44:08.279: INFO: namespace pods-863 deletion completed in 6.109302533s

• [SLOW TEST:17.525 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
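The "setting up watch" step opens a watch before the pod is submitted, so the test can assert that the creation and deletion events were actually observed. A rough command-line equivalent, with a hypothetical pod name and image:

    # Open the watch first, as the test does, so no event is missed.
    kubectl get pods -w &
    kubectl run watch-demo --image=busybox --restart=Never -- sleep 300
    # A graceful delete sends SIGTERM, waits out the grace period, and the
    # watch then reports the pod's removal.
    kubectl delete pod watch-demo --grace-period=30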
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:44:08.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 15 14:44:08.358: INFO: Waiting up to 5m0s for pod "pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df" in namespace "emptydir-4533" to be "success or failure"
May 15 14:44:08.365: INFO: Pod "pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.842658ms
May 15 14:44:10.718: INFO: Pod "pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360016954s
May 15 14:44:12.722: INFO: Pod "pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.363372804s
STEP: Saw pod success
May 15 14:44:12.722: INFO: Pod "pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df" satisfied condition "success or failure"
May 15 14:44:12.724: INFO: Trying to get logs from node iruya-worker pod pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df container test-container: 
STEP: delete the pod
May 15 14:44:12.757: INFO: Waiting for pod pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df to disappear
May 15 14:44:12.760: INFO: Pod pod-565b3ddb-8f72-4cc9-b621-a24065d2f3df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:44:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4533" for this suite.
May 15 14:44:18.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:44:18.937: INFO: namespace emptydir-4533 deletion completed in 6.174629801s

• [SLOW TEST:10.658 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
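The tuple in the spec name decodes as (user, file mode, medium): a non-root user creating a 0777 file on the default medium (node disk, as opposed to medium: Memory). A minimal sketch of such a pod, with the UID, image, and paths as illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                # non-root; the UID is illustrative
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir: {}                   # default medium: backed by node disk
    EOF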
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3159' May 15 14:44:27.523: INFO: stderr: "" May 15 14:44:27.523: INFO: stdout: "true" May 15 14:44:27.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2r5vm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3159' May 15 14:44:27.622: INFO: stderr: "" May 15 14:44:27.622: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 14:44:27.622: INFO: validating pod update-demo-nautilus-2r5vm May 15 14:44:27.626: INFO: got data: { "image": "nautilus.jpg" } May 15 14:44:27.626: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 14:44:27.626: INFO: update-demo-nautilus-2r5vm is verified up and running May 15 14:44:27.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkb27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3159' May 15 14:44:27.714: INFO: stderr: "" May 15 14:44:27.714: INFO: stdout: "true" May 15 14:44:27.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xkb27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3159' May 15 14:44:27.806: INFO: stderr: "" May 15 14:44:27.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 14:44:27.806: INFO: validating pod update-demo-nautilus-xkb27 May 15 14:44:27.810: INFO: got data: { "image": "nautilus.jpg" } May 15 14:44:27.810: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 14:44:27.810: INFO: update-demo-nautilus-xkb27 is verified up and running STEP: using delete to clean up resources May 15 14:44:27.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3159' May 15 14:44:27.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:44:48.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
May 15 14:44:49.041: INFO: Waiting up to 5m0s for pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08" in namespace "containers-3955" to be "success or failure"
May 15 14:44:49.043: INFO: Pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020487ms
May 15 14:44:51.138: INFO: Pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096607177s
May 15 14:44:53.142: INFO: Pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08": Phase="Running", Reason="", readiness=true. Elapsed: 4.101158141s
May 15 14:44:55.147: INFO: Pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105930325s
STEP: Saw pod success
May 15 14:44:55.147: INFO: Pod "client-containers-5f107446-0400-4661-8683-0d3cf834cc08" satisfied condition "success or failure"
May 15 14:44:55.151: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5f107446-0400-4661-8683-0d3cf834cc08 container test-container: 
STEP: delete the pod
May 15 14:44:55.187: INFO: Waiting for pod client-containers-5f107446-0400-4661-8683-0d3cf834cc08 to disappear
May 15 14:44:55.200: INFO: Pod client-containers-5f107446-0400-4661-8683-0d3cf834cc08 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:44:55.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3955" for this suite.
May 15 14:45:01.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:45:01.284: INFO: namespace containers-3955 deletion completed in 6.079656006s

• [SLOW TEST:12.338 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
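The "override all" pod replaces both halves of the image's default invocation: command overrides the image ENTRYPOINT and args overrides its CMD. A minimal sketch, with the image and strings as illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/echo"]               # overrides the image ENTRYPOINT
        args: ["overridden", "arguments"]    # overrides the image CMD
    EOF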
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 15 14:45:01.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 15 14:45:06.412: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 15 14:45:06.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-137" for this suite.
May 15 14:45:12.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 15 14:45:12.597: INFO: namespace container-runtime-137 deletion completed in 6.083531213s

• [SLOW TEST:11.312 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
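With terminationMessagePolicy: FallbackToLogsOnError, a container that fails without writing to /dev/termination-log gets the tail of its log as the termination message, which is how the DONE printed by the container above ends up matched against the status. A sketch under the same assumptions as the earlier examples (names and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: fail-container
        image: busybox
        command: ["sh", "-c", "echo DONE; exit 1"]   # the log line becomes the message on failure
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termination-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'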
------------------------------
SS
May 15 14:45:12.597: INFO: Running AfterSuite actions on all nodes
May 15 14:45:12.597: INFO: Running AfterSuite actions on node 1
May 15 14:45:12.597: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 6558.051 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (6558.24s)
FAIL