I1222 12:56:12.000775       8 e2e.go:243] Starting e2e run "7a2bb7a1-b7f7-44e5-a2e3-2b4959765b28" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577019370 - Will randomize all specs
Will run 215 of 4412 specs

Dec 22 12:56:12.317: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:56:12.319: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 22 12:56:12.343: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 22 12:56:12.368: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 22 12:56:12.368: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 22 12:56:12.368: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 22 12:56:12.375: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 22 12:56:12.376: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 22 12:56:12.376: INFO: e2e test version: v1.15.7
Dec 22 12:56:12.377: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:12.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Dec 22 12:56:12.577: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 12:56:12.649: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 61.608534ms)
Dec 22 12:56:12.666: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 16.324544ms)
Dec 22 12:56:12.709: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 42.918683ms)
Dec 22 12:56:12.719: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 10.635917ms)
Dec 22 12:56:12.726: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 6.301823ms)
Dec 22 12:56:12.732: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 6.008626ms)
Dec 22 12:56:12.739: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 7.412837ms)
Dec 22 12:56:12.745: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.95014ms)
Dec 22 12:56:12.749: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 4.079051ms)
Dec 22 12:56:12.757: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 8.040498ms)
Dec 22 12:56:12.762: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 4.714542ms)
Dec 22 12:56:12.767: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.239169ms)
Dec 22 12:56:12.773: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.794552ms)
Dec 22 12:56:12.777: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.888427ms)
Dec 22 12:56:12.781: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 4.102088ms)
Dec 22 12:56:12.785: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.477348ms)
Dec 22 12:56:12.790: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 4.833098ms)
Dec 22 12:56:12.793: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.879021ms)
Dec 22 12:56:12.797: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.563607ms)
Dec 22 12:56:12.801: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.824856ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:56:12.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9804" for this suite.
Dec 22 12:56:18.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:56:18.960: INFO: namespace proxy-9804 deletion completed in 6.156097911s

• [SLOW TEST:6.584 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:18.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2798b649-f35d-4c61-b4a5-6b9a84b7f9eb
STEP: Creating secret with name s-test-opt-upd-5c029714-8cee-4437-8f2e-e94b59f791a9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2798b649-f35d-4c61-b4a5-6b9a84b7f9eb
STEP: Updating secret s-test-opt-upd-5c029714-8cee-4437-8f2e-e94b59f791a9
STEP: Creating secret with name s-test-opt-create-1b2e8725-b0c8-4723-b255-663b8411c778
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:56:35.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-496" for this suite.
Dec 22 12:56:59.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:56:59.832: INFO: namespace projected-496 deletion completed in 24.205342038s

• [SLOW TEST:40.871 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:56:59.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bjxn
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 12:57:00.009: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bjxn" in namespace "subpath-7955" to be "success or failure"
Dec 22 12:57:00.028: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.599858ms
Dec 22 12:57:02.040: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030854334s
Dec 22 12:57:04.061: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052092774s
Dec 22 12:57:06.073: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064078951s
Dec 22 12:57:08.080: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07062104s
Dec 22 12:57:10.085: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075806485s
Dec 22 12:57:12.094: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 12.084875064s
Dec 22 12:57:14.102: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 14.092730831s
Dec 22 12:57:16.109: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 16.100411063s
Dec 22 12:57:18.118: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 18.109228651s
Dec 22 12:57:20.131: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 20.121893984s
Dec 22 12:57:25.423: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 25.414495028s
Dec 22 12:57:27.431: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 27.422407063s
Dec 22 12:57:29.440: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 29.43091811s
Dec 22 12:57:31.472: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Running", Reason="", readiness=true. Elapsed: 31.463599561s
Dec 22 12:57:33.646: INFO: Pod "pod-subpath-test-configmap-bjxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.637555239s
STEP: Saw pod success
Dec 22 12:57:33.647: INFO: Pod "pod-subpath-test-configmap-bjxn" satisfied condition "success or failure"
Dec 22 12:57:33.651: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-bjxn container test-container-subpath-configmap-bjxn:
STEP: delete the pod
Dec 22 12:57:33.773: INFO: Waiting for pod pod-subpath-test-configmap-bjxn to disappear
Dec 22 12:57:33.780: INFO: Pod pod-subpath-test-configmap-bjxn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bjxn
Dec 22 12:57:33.780: INFO: Deleting pod "pod-subpath-test-configmap-bjxn" in namespace "subpath-7955"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:57:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7955" for this suite.
Dec 22 12:57:39.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:57:39.973: INFO: namespace subpath-7955 deletion completed in 6.181633578s

• [SLOW TEST:40.141 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:57:39.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:57:40.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53" in namespace "downward-api-444" to be "success or failure"
Dec 22 12:57:40.220: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 51.693487ms
Dec 22 12:57:42.229: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060125749s
Dec 22 12:57:44.247: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078112755s
Dec 22 12:57:46.255: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08619507s
Dec 22 12:57:48.264: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095763235s
Dec 22 12:57:50.273: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104820429s
STEP: Saw pod success
Dec 22 12:57:50.273: INFO: Pod "downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53" satisfied condition "success or failure"
Dec 22 12:57:50.278: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 container client-container:
STEP: delete the pod
Dec 22 12:57:50.381: INFO: Waiting for pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 to disappear
Dec 22 12:57:50.392: INFO: Pod downwardapi-volume-bc12ce02-8fac-4d6c-91d4-6dc4a7d2aa53 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:57:50.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-444" for this suite.
Dec 22 12:57:56.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:57:56.517: INFO: namespace downward-api-444 deletion completed in 6.116340589s

• [SLOW TEST:16.544 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:57:56.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:57:56.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb" in namespace "projected-4522" to be "success or failure"
Dec 22 12:57:56.675: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.022015ms
Dec 22 12:57:58.682: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02213503s
Dec 22 12:58:00.694: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033655161s
Dec 22 12:58:02.700: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040427426s
Dec 22 12:58:05.654: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.994101265s
Dec 22 12:58:07.673: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.012899845s
Dec 22 12:58:09.686: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.025938363s
STEP: Saw pod success
Dec 22 12:58:09.686: INFO: Pod "downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb" satisfied condition "success or failure"
Dec 22 12:58:09.696: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb container client-container:
STEP: delete the pod
Dec 22 12:58:09.922: INFO: Waiting for pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb to disappear
Dec 22 12:58:10.056: INFO: Pod downwardapi-volume-f8e06bc7-eaff-4ddb-adb3-a1f26a1d2cdb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:58:10.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4522" for this suite.
Dec 22 12:58:16.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:58:16.253: INFO: namespace projected-4522 deletion completed in 6.160428696s

• [SLOW TEST:19.735 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:58:16.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 12:58:16.389: INFO: Waiting up to 5m0s for pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160" in namespace "emptydir-2338" to be "success or failure"
Dec 22 12:58:16.399: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152654ms
Dec 22 12:58:18.407: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018155077s
Dec 22 12:58:20.494: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104649349s
Dec 22 12:58:22.504: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114259897s
Dec 22 12:58:24.513: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124165788s
Dec 22 12:58:26.523: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Pending", Reason="", readiness=false. Elapsed: 10.133310243s
Dec 22 12:58:28.538: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.148391657s
STEP: Saw pod success
Dec 22 12:58:28.538: INFO: Pod "pod-9c677681-08ba-4864-9da3-dc4fbe5ae160" satisfied condition "success or failure"
Dec 22 12:58:28.545: INFO: Trying to get logs from node iruya-node pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 container test-container:
STEP: delete the pod
Dec 22 12:58:28.849: INFO: Waiting for pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 to disappear
Dec 22 12:58:28.935: INFO: Pod pod-9c677681-08ba-4864-9da3-dc4fbe5ae160 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:58:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2338" for this suite.
Dec 22 12:58:34.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:58:35.062: INFO: namespace emptydir-2338 deletion completed in 6.119262614s

• [SLOW TEST:18.809 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:58:35.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 22 12:58:57.353: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:57.353: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:57.705: INFO: Exec stderr: ""
Dec 22 12:58:57.705: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:57.706: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.148: INFO: Exec stderr: ""
Dec 22 12:58:58.148: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.148: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.551: INFO: Exec stderr: ""
Dec 22 12:58:58.551: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.551: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:58.803: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 22 12:58:58.803: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:58.803: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.039: INFO: Exec stderr: ""
Dec 22 12:58:59.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.039: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.294: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 22 12:58:59.294: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.294: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.554: INFO: Exec stderr: ""
Dec 22 12:58:59.554: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.554: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:58:59.893: INFO: Exec stderr: ""
Dec 22 12:58:59.893: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:58:59.893: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:59:00.192: INFO: Exec stderr: ""
Dec 22 12:59:00.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4024 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:59:00.192: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:59:00.485: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 12:59:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4024" for this suite.
Dec 22 12:59:52.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:59:52.666: INFO: namespace e2e-kubelet-etc-hosts-4024 deletion completed in 52.166974054s

• [SLOW TEST:77.604 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 12:59:52.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 22 12:59:52.811: INFO: Waiting up to 5m0s for pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33" in namespace "containers-7837" to be "success or failure"
Dec 22 12:59:52.864: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 52.401959ms
Dec 22 12:59:54.880: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068937937s
Dec 22 12:59:56.891: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079904642s
Dec 22 12:59:58.906: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094451318s
Dec 22 13:00:00.919: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10727008s
Dec 22 13:00:02.978: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166159824s
Dec 22 13:00:04.997: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.185231934s
STEP: Saw pod success
Dec 22 13:00:04.997: INFO: Pod "client-containers-f87dfc08-d09a-473e-b90a-25896981af33" satisfied condition "success or failure"
Dec 22 13:00:05.005: INFO: Trying to get logs from node iruya-node pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 container test-container:
STEP: delete the pod
Dec 22 13:00:05.112: INFO: Waiting for pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 to disappear
Dec 22 13:00:05.184: INFO: Pod client-containers-f87dfc08-d09a-473e-b90a-25896981af33 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:00:05.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7837" for this suite.
Dec 22 13:00:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:00:11.441: INFO: namespace containers-7837 deletion completed in 6.24798599s

• [SLOW TEST:18.774 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:00:11.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:00:21.692: INFO: Waiting up to 5m0s for pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac" in namespace "pods-2392" to be "success or failure"
Dec 22 13:00:21.714: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 22.321946ms
Dec 22 13:00:23.721: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029412276s
Dec 22 13:00:25.750: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05786698s
Dec 22 13:00:27.757: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065193407s
Dec 22 13:00:29.765: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073325961s
Dec 22 13:00:31.780: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088483915s
STEP: Saw pod success
Dec 22 13:00:31.780: INFO: Pod "client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac" satisfied condition "success or failure"
Dec 22 13:00:31.794: INFO: Trying to get logs from node iruya-node pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac container env3cont:
STEP: delete the pod
Dec 22 13:00:31.988: INFO: Waiting for pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac to disappear
Dec 22 13:00:31.994: INFO: Pod client-envvars-d89f505e-d6ae-46b1-9fa0-7e667ff9a7ac no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:00:31.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2392" for this suite.
Dec 22 13:01:14.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:01:14.286: INFO: namespace pods-2392 deletion completed in 42.279315834s
• [SLOW TEST:62.845 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:01:14.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:01:26.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-308" for this suite.
Dec 22 13:02:18.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:18.706: INFO: namespace kubelet-test-308 deletion completed in 52.167505744s
• [SLOW TEST:64.420 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:18.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9fc9bd93-529e-44cc-84c0-cd5d84ad6a52
STEP: Creating a pod to test consume configMaps
Dec 22 13:02:18.872: INFO: Waiting up to 5m0s for pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362" in namespace "configmap-1975" to be "success or failure"
Dec 22 13:02:19.007: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 135.287324ms
Dec 22 13:02:21.013: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140993735s
Dec 22 13:02:23.021: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149107897s
Dec 22 13:02:25.027: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155310991s
Dec 22 13:02:27.071: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198432335s
Dec 22 13:02:29.080: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207581015s
Dec 22 13:02:31.085: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.213098937s
STEP: Saw pod success
Dec 22 13:02:31.085: INFO: Pod "pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362" satisfied condition "success or failure"
Dec 22 13:02:31.088: INFO: Trying to get logs from node iruya-node pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 container configmap-volume-test:
STEP: delete the pod
Dec 22 13:02:31.552: INFO: Waiting for pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 to disappear
Dec 22 13:02:31.571: INFO: Pod pod-configmaps-235a2b64-7423-4f70-afbb-07ec4fe37362 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:02:31.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1975" for this suite.
Dec 22 13:02:37.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:38.055: INFO: namespace configmap-1975 deletion completed in 6.47559887s
• [SLOW TEST:19.349 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:38.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 22 13:02:38.122: INFO: Waiting up to 5m0s for pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc" in namespace "emptydir-6157" to be "success or failure"
Dec 22 13:02:38.275: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 152.879103ms
Dec 22 13:02:40.282: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160523748s
Dec 22 13:02:42.290: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168197113s
Dec 22 13:02:44.296: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174554731s
Dec 22 13:02:46.304: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181899675s
Dec 22 13:02:48.563: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441050342s
STEP: Saw pod success
Dec 22 13:02:48.563: INFO: Pod "pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc" satisfied condition "success or failure"
Dec 22 13:02:48.569: INFO: Trying to get logs from node iruya-node pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc container test-container:
STEP: delete the pod
Dec 22 13:02:48.782: INFO: Waiting for pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc to disappear
Dec 22 13:02:48.802: INFO: Pod pod-fb1112a7-d1b9-48ad-94e9-9b4c455816fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:02:48.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6157" for this suite.
Dec 22 13:02:54.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:54.990: INFO: namespace emptydir-6157 deletion completed in 6.177949687s
• [SLOW TEST:16.934 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:02:54.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ee32ee52-5296-4b4b-abd3-f4534ce6d50b
STEP: Creating a pod to test consume secrets
Dec 22 13:02:55.259: INFO: Waiting up to 5m0s for pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602" in namespace "secrets-9228" to be "success or failure"
Dec 22 13:02:55.290: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 30.747789ms
Dec 22 13:02:57.304: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044712648s
Dec 22 13:02:59.312: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052584758s
Dec 22 13:03:01.318: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058193036s
Dec 22 13:03:03.327: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068026425s
Dec 22 13:03:05.335: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07558767s
STEP: Saw pod success
Dec 22 13:03:05.335: INFO: Pod "pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602" satisfied condition "success or failure"
Dec 22 13:03:05.340: INFO: Trying to get logs from node iruya-node pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 container secret-volume-test:
STEP: delete the pod
Dec 22 13:03:05.441: INFO: Waiting for pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 to disappear
Dec 22 13:03:05.458: INFO: Pod pod-secrets-f03a2f9a-1e0d-42c9-84d6-cb162f558602 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9228" for this suite.
Dec 22 13:03:11.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:11.718: INFO: namespace secrets-9228 deletion completed in 6.253881381s
STEP: Destroying namespace "secret-namespace-7949" for this suite.
Dec 22 13:03:17.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:17.952: INFO: namespace secret-namespace-7949 deletion completed in 6.234104239s
• [SLOW TEST:22.962 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:17.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:03:18.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb" in namespace "downward-api-3849" to be "success or failure"
Dec 22 13:03:18.125: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 43.054033ms
Dec 22 13:03:20.620: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53856126s
Dec 22 13:03:22.635: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553460108s
Dec 22 13:03:24.654: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572149531s
Dec 22 13:03:26.667: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585113965s
Dec 22 13:03:28.672: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590055442s
Dec 22 13:03:30.677: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.595405799s
STEP: Saw pod success
Dec 22 13:03:30.677: INFO: Pod "downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb" satisfied condition "success or failure"
Dec 22 13:03:30.680: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb container client-container:
STEP: delete the pod
Dec 22 13:03:31.025: INFO: Waiting for pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb to disappear
Dec 22 13:03:31.033: INFO: Pod downwardapi-volume-31b6be78-aba8-4ec2-8e13-3a77b5bfacfb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:31.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3849" for this suite.
Dec 22 13:03:37.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:37.300: INFO: namespace downward-api-3849 deletion completed in 6.25565923s
• [SLOW TEST:19.348 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:37.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7dc2b900-d97f-45d4-8be0-37e09d73b551
STEP: Creating a pod to test consume configMaps
Dec 22 13:03:37.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f" in namespace "projected-4534" to be "success or failure"
Dec 22 13:03:37.570: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 99.697211ms
Dec 22 13:03:39.578: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10729245s
Dec 22 13:03:41.600: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129263212s
Dec 22 13:03:43.606: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135394893s
Dec 22 13:03:45.613: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142970987s
Dec 22 13:03:47.619: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148333122s
STEP: Saw pod success
Dec 22 13:03:47.619: INFO: Pod "pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f" satisfied condition "success or failure"
Dec 22 13:03:47.623: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f container projected-configmap-volume-test:
STEP: delete the pod
Dec 22 13:03:48.380: INFO: Waiting for pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f to disappear
Dec 22 13:03:48.385: INFO: Pod pod-projected-configmaps-c5c3e01b-68f0-4407-b4a2-04fecc383a9f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:03:48.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4534" for this suite.
Dec 22 13:03:54.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:54.592: INFO: namespace projected-4534 deletion completed in 6.201917176s
• [SLOW TEST:17.291 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:03:54.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 13:03:54.827: INFO: Waiting up to 5m0s for pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b" in namespace "emptydir-5078" to be "success or failure"
Dec 22 13:03:54.836: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.05209ms
Dec 22 13:03:56.841: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014047627s
Dec 22 13:03:58.851: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024337853s
Dec 22 13:04:00.864: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037602487s
Dec 22 13:04:02.877: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050732889s
Dec 22 13:04:04.885: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Running", Reason="", readiness=true. Elapsed: 10.0579374s
Dec 22 13:04:06.895: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068441751s
STEP: Saw pod success
Dec 22 13:04:06.895: INFO: Pod "pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b" satisfied condition "success or failure"
Dec 22 13:04:06.903: INFO: Trying to get logs from node iruya-node pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b container test-container:
STEP: delete the pod
Dec 22 13:04:06.965: INFO: Waiting for pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b to disappear
Dec 22 13:04:06.968: INFO: Pod pod-b8d9ae0e-0dc5-49e7-96d7-6523dc368a1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:04:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5078" for this suite.
Dec 22 13:04:15.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:04:15.177: INFO: namespace emptydir-5078 deletion completed in 8.201910328s
• [SLOW TEST:20.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:04:15.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:04:15.336: INFO: Creating deployment "test-recreate-deployment"
Dec 22 13:04:15.368: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 22 13:04:15.376: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 22 13:04:17.515: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 22 13:04:17.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:19.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:21.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:23.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:25.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616655, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:04:27.528: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 22 13:04:27.576: INFO: Updating deployment test-recreate-deployment
Dec 22 13:04:27.576: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 22 13:04:28.294: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/deployments/test-recreate-deployment,UID:3c2c135c-91ed-42f9-a013-c484fff08dcc,ResourceVersion:17636558,Generation:2,CreationTimestamp:2019-12-22 13:04:15 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-22 13:04:28 +0000 UTC 2019-12-22 13:04:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-22 13:04:28 +0000 UTC 2019-12-22 13:04:15 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Dec 22 13:04:28.301: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8dc6fca4-554b-4fb4-a0bd-f59a0eb31c44,ResourceVersion:17636556,Generation:1,CreationTimestamp:2019-12-22 13:04:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3c2c135c-91ed-42f9-a013-c484fff08dcc 0xc001c76017 0xc001c76018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 13:04:28.301: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 22 13:04:28.301: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4908,SelfLink:/apis/apps/v1/namespaces/deployment-4908/replicasets/test-recreate-deployment-6df85df6b9,UID:b968b077-2eb1-4e0c-91e4-d3c0c3ebee01,ResourceVersion:17636545,Generation:2,CreationTimestamp:2019-12-22 13:04:15 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3c2c135c-91ed-42f9-a013-c484fff08dcc 0xc001c761a7 0xc001c761a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 13:04:28.551: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lx9px" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lx9px,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4908,SelfLink:/api/v1/namespaces/deployment-4908/pods/test-recreate-deployment-5c8c9cc69d-lx9px,UID:54378101-80b5-4033-a29a-2daa5f524c25,ResourceVersion:17636557,Generation:0,CreationTimestamp:2019-12-22 13:04:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8dc6fca4-554b-4fb4-a0bd-f59a0eb31c44 0xc001c76d07 0xc001c76d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5qbnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5qbnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5qbnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c76d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c76da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-22 13:04:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:04:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4908" for this suite. 
Dec 22 13:04:36.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:04:36.732: INFO: namespace deployment-4908 deletion completed in 8.159579844s • [SLOW TEST:21.554 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:04:36.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 22 13:04:36.950: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 22 13:04:41.960: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 22 13:04:47.972: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 22 13:04:50.069: INFO: Creating deployment "test-rollover-deployment" Dec 22 13:04:50.163: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 22 13:04:52.244: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 22 13:04:52.256: INFO: 
Ensure that both replica sets have 1 created replica Dec 22 13:04:52.265: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 22 13:04:52.283: INFO: Updating deployment test-rollover-deployment Dec 22 13:04:52.283: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 22 13:04:54.301: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 22 13:04:54.308: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 22 13:04:54.315: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:04:54.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:04:56.332: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:04:56.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:04:58.337: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:04:58.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:00.331: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:00.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:02.325: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:02.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:04.344: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:04.344: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:06.398: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:06.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:08.325: INFO: all 
replica sets need to contain the pod-template-hash label Dec 22 13:05:08.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:10.330: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:10.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:12.325: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:12.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:14.341: INFO: all replica sets need to contain the pod-template-hash label Dec 22 13:05:14.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616706, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:16.410: INFO: Dec 22 13:05:16.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616716, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616690, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 13:05:18.353: INFO: Dec 22 13:05:18.353: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 22 13:05:18.372: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/deployments/test-rollover-deployment,UID:21839bb1-c01b-4758-8d6a-797f4471a679,ResourceVersion:17636718,Generation:2,CreationTimestamp:2019-12-22 13:04:50 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-22 13:04:50 +0000 UTC 2019-12-22 13:04:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-22 13:05:16 +0000 UTC 2019-12-22 13:04:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 22 13:05:18.377: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-deployment-854595fc44,UID:559a876b-8d03-4657-a28e-fcba58823508,ResourceVersion:17636706,Generation:2,CreationTimestamp:2019-12-22 13:04:52 +0000 UTC,DeletionTimestamp: 
,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc002964547 0xc002964548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 22 13:05:18.377: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 22 13:05:18.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-controller,UID:d9f85c99-753e-49c1-bf80-8c14647e58b5,ResourceVersion:17636717,Generation:2,CreationTimestamp:2019-12-22 13:04:36 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc00296445f 0xc002964470}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 13:05:18.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5284,SelfLink:/apis/apps/v1/namespaces/deployment-5284/replicasets/test-rollover-deployment-9b8b997cf,UID:8119d1e5-05c0-4ff9-8dc9-57a2d9a9aafe,ResourceVersion:17636665,Generation:2,CreationTimestamp:2019-12-22 13:04:50 +0000 UTC,DeletionTimestamp: 
,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 21839bb1-c01b-4758-8d6a-797f4471a679 0xc002964610 0xc002964611}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 13:05:18.382: INFO: Pod "test-rollover-deployment-854595fc44-mlsqb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-mlsqb,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5284,SelfLink:/api/v1/namespaces/deployment-5284/pods/test-rollover-deployment-854595fc44-mlsqb,UID:acbf4f92-745e-4e9a-b86d-46bb9f05e314,ResourceVersion:17636691,Generation:0,CreationTimestamp:2019-12-22 13:04:52 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 559a876b-8d03-4657-a28e-fcba58823508 0xc002965247 0xc002965248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pgbjq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pgbjq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pgbjq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029652c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029652e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:05:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:05:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:04:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-22 13:04:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-22 13:05:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e1493aa9b3f20b7420a446cce6289a60a1c14caf4c33715b5e2a997d28756114}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:05:18.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5284" for this suite. Dec 22 13:05:26.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:05:26.671: INFO: namespace deployment-5284 deletion completed in 8.28439563s • [SLOW TEST:49.938 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:05:26.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:05:26.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4130" for this suite. Dec 22 13:05:32.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:05:33.083: INFO: namespace kubelet-test-4130 deletion completed in 6.151382028s • [SLOW TEST:6.412 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:05:33.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:05:45.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4096" for this suite. Dec 22 13:05:51.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:05:51.518: INFO: namespace emptydir-wrapper-4096 deletion completed in 6.171275492s • [SLOW TEST:18.434 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:05:51.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 22 13:06:09.660: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03) Dec 22 13:06:09.666: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03) Dec 22 13:06:09.670: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03) Dec 22 13:06:09.675: INFO: Unable to read jessie_udp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03) Dec 22 13:06:09.679: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03: the server could not find the requested resource (get pods dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03) Dec 22 13:06:09.679: INFO: Lookups using dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03 failed for: [wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 22 
13:06:14.770: INFO: DNS probes using dns-3827/dns-test-d569ad27-e719-4e03-a183-0a1deb9a8b03 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:06:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3827" for this suite. Dec 22 13:06:21.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:06:21.175: INFO: namespace dns-3827 deletion completed in 6.221331924s • [SLOW TEST:29.656 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:06:21.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 22 13:06:21.412: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98" in namespace "projected-2753" to be "success or failure" Dec 22 13:06:21.446: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 33.920173ms Dec 22 13:06:23.454: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04212416s Dec 22 13:06:25.467: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055391637s Dec 22 13:06:27.475: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063395145s Dec 22 13:06:29.484: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072457748s Dec 22 13:06:31.490: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07826614s Dec 22 13:06:33.500: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.088823648s STEP: Saw pod success Dec 22 13:06:33.501: INFO: Pod "downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98" satisfied condition "success or failure" Dec 22 13:06:33.505: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 container client-container: STEP: delete the pod Dec 22 13:06:33.640: INFO: Waiting for pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 to disappear Dec 22 13:06:33.645: INFO: Pod downwardapi-volume-8a568373-14d0-42af-a5c0-702f4af47e98 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:06:33.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2753" for this suite. Dec 22 13:06:39.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:06:39.853: INFO: namespace projected-2753 deletion completed in 6.191020239s • [SLOW TEST:18.678 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:06:39.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b in namespace container-probe-271 Dec 22 13:06:47.997: INFO: Started pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b in namespace container-probe-271 STEP: checking the pod's current state and verifying that restartCount is present Dec 22 13:06:48.000: INFO: Initial restart count of pod busybox-e695e822-9cf9-4d2a-923e-c2e065566b2b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:10:49.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-271" for this suite. 
Dec 22 13:10:55.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:10:56.032: INFO: namespace container-probe-271 deletion completed in 6.284171866s • [SLOW TEST:256.178 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:10:56.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-26c32882-3a57-4521-a139-4235959664e8 in namespace container-probe-5751 Dec 22 13:11:08.116: INFO: Started pod busybox-26c32882-3a57-4521-a139-4235959664e8 in namespace container-probe-5751 STEP: checking the pod's current state and verifying that restartCount is present Dec 22 13:11:08.121: INFO: Initial restart count 
of pod busybox-26c32882-3a57-4521-a139-4235959664e8 is 0 Dec 22 13:11:54.344: INFO: Restart count of pod container-probe-5751/busybox-26c32882-3a57-4521-a139-4235959664e8 is now 1 (46.223173408s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:11:54.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5751" for this suite. Dec 22 13:12:00.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:12:00.603: INFO: namespace container-probe-5751 deletion completed in 6.224255204s • [SLOW TEST:64.569 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:12:00.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Dec 22 13:12:00.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3884 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Dec 22 13:12:12.023: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 22 13:12:12.023: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:12:14.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3884" for this suite. 
Dec 22 13:12:20.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:12:20.163: INFO: namespace kubectl-3884 deletion completed in 6.118592871s • [SLOW TEST:19.559 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 22 13:12:20.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 22 13:12:20.330: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6445,SelfLink:/api/v1/namespaces/watch-6445/configmaps/e2e-watch-test-resource-version,UID:4aa876a6-8cae-4d09-aeff-7d3e7f7df99c,ResourceVersion:17637515,Generation:0,CreationTimestamp:2019-12-22 13:12:20 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 22 13:12:20.330: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6445,SelfLink:/api/v1/namespaces/watch-6445/configmaps/e2e-watch-test-resource-version,UID:4aa876a6-8cae-4d09-aeff-7d3e7f7df99c,ResourceVersion:17637516,Generation:0,CreationTimestamp:2019-12-22 13:12:20 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:12:20.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6445" for this suite. 
Dec 22 13:12:26.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:26.504: INFO: namespace watch-6445 deletion completed in 6.167190008s
• [SLOW TEST:6.342 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:26.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-784ccefb-143b-4b98-8d3a-60f64f12778e
STEP: Creating a pod to test consume secrets
Dec 22 13:12:26.643: INFO: Waiting up to 5m0s for pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18" in namespace "secrets-6786" to be "success or failure"
Dec 22 13:12:26.648: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713564ms
Dec 22 13:12:28.662: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01879535s
Dec 22 13:12:30.674: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03129842s
Dec 22 13:12:32.680: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037118764s
Dec 22 13:12:34.690: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047293144s
Dec 22 13:12:36.698: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055258743s
STEP: Saw pod success
Dec 22 13:12:36.698: INFO: Pod "pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18" satisfied condition "success or failure"
Dec 22 13:12:36.701: INFO: Trying to get logs from node iruya-node pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 container secret-volume-test:
STEP: delete the pod
Dec 22 13:12:36.822: INFO: Waiting for pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 to disappear
Dec 22 13:12:36.830: INFO: Pod pod-secrets-10e26b29-072f-45cf-a7cf-e8da63b3fd18 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6786" for this suite.
Dec 22 13:12:42.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:43.036: INFO: namespace secrets-6786 deletion completed in 6.199910321s
• [SLOW TEST:16.531 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:43.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8439" for this suite.
Dec 22 13:12:49.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:49.280: INFO: namespace services-8439 deletion completed in 6.145911432s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.243 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:12:49.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 22 13:12:49.954: INFO: created pod pod-service-account-defaultsa
Dec 22 13:12:49.954: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 22 13:12:49.976: INFO: created pod pod-service-account-mountsa
Dec 22 13:12:49.976: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 22 13:12:50.010: INFO: created pod pod-service-account-nomountsa
Dec 22 13:12:50.010: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 22 13:12:50.040: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 22 13:12:50.040: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 22 13:12:50.151: INFO: created pod pod-service-account-mountsa-mountspec
Dec 22 13:12:50.151: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 22 13:12:50.223: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 22 13:12:50.223: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 22 13:12:50.333: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 22 13:12:50.333: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 22 13:12:50.374: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 22 13:12:50.374: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 22 13:12:50.408: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 22 13:12:50.408: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:12:50.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5114" for this suite.
Dec 22 13:13:18.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:18.797: INFO: namespace svcaccounts-5114 deletion completed in 28.226131324s
• [SLOW TEST:29.517 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:13:18.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:13:18.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1295'
Dec 22 13:13:18.981: INFO: stderr: ""
Dec 22 13:13:18.981: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 22 13:13:18.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1295'
Dec 22 13:13:26.004: INFO: stderr: ""
Dec 22 13:13:26.005: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:13:26.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1295" for this suite.
Dec 22 13:13:32.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:32.216: INFO: namespace kubectl-1295 deletion completed in 6.201573048s
• [SLOW TEST:13.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:13:32.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 13:13:32.303: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:13:53.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4162" for this suite.
Dec 22 13:14:15.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:15.514: INFO: namespace init-container-4162 deletion completed in 22.129729549s
• [SLOW TEST:43.297 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:14:15.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-16542908-963b-4d08-95e1-80e3336b761c
STEP: Creating a pod to test consume configMaps
Dec 22 13:14:15.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3" in namespace "projected-7110" to be "success or failure"
Dec 22 13:14:15.671: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 64.862635ms
Dec 22 13:14:17.677: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070840823s
Dec 22 13:14:19.714: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107841381s
Dec 22 13:14:21.828: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221571794s
Dec 22 13:14:23.938: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331296541s
Dec 22 13:14:25.947: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.340236757s
STEP: Saw pod success
Dec 22 13:14:25.947: INFO: Pod "pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3" satisfied condition "success or failure"
Dec 22 13:14:25.950: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 container projected-configmap-volume-test:
STEP: delete the pod
Dec 22 13:14:26.064: INFO: Waiting for pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 to disappear
Dec 22 13:14:26.098: INFO: Pod pod-projected-configmaps-d3e9cfe9-b8a7-4e8c-b764-bfeb45189cd3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:14:26.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7110" for this suite.
Dec 22 13:14:32.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:32.237: INFO: namespace projected-7110 deletion completed in 6.135438166s
• [SLOW TEST:16.723 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:14:32.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-456
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-456
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-456
Dec 22 13:14:32.411: INFO: Found 0 stateful pods, waiting for 1
Dec 22 13:14:42.451: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 22 13:14:42.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:14:43.340: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:14:43.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:14:43.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 22 13:14:43.349: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 22 13:14:53.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:14:53.356: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:14:53.377: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:14:53.378: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:14:53.378: INFO:
Dec 22 13:14:53.378: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 22 13:14:54.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991404934s
Dec 22 13:14:56.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.402015312s
Dec 22 13:14:57.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.675790972s
Dec 22 13:14:58.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.664945956s
Dec 22 13:15:00.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.640685829s
Dec 22 13:15:01.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.034786249s
Dec 22 13:15:02.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 528.214164ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-456
Dec 22 13:15:03.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:04.844: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 22 13:15:04.844: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:04.844: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 22 13:15:04.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:05.513: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 22 13:15:05.513: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:05.513: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 22 13:15:05.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 13:15:05.899: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 22 13:15:05.899: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 13:15:05.899: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 22 13:15:05.906: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:05.906: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Dec 22 13:15:15.917: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:15.917: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:15:15.917: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 22 13:15:15.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:16.449: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:16.449: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:16.449: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 22 13:15:16.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:16.812: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:16.812: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:16.812: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 22 13:15:16.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 13:15:17.431: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 22 13:15:17.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 13:15:17.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 22 13:15:17.431: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:15:17.447: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 22 13:15:27.466: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.466: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.466: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 13:15:27.516: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:27.516: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:27.516: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:27.516: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:27.516: INFO:
Dec 22 13:15:27.516: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:29.807: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:29.807: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:29.807: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:29.807: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:29.807: INFO:
Dec 22 13:15:29.808: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:30.822: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:30.822: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:30.823: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:30.823: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:30.823: INFO:
Dec 22 13:15:30.823: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:31.833: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:31.833: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:31.833: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:31.833: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:31.833: INFO:
Dec 22 13:15:31.833: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:33.364: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:33.364: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:33.365: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:33.365: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:33.365: INFO:
Dec 22 13:15:33.365: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:34.374: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:34.375: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:34.375: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:34.375: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:34.375: INFO:
Dec 22 13:15:34.375: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:35.386: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:35.386: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }]
Dec 22 13:15:35.387: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:35.387: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }]
Dec 22 13:15:35.387: INFO:
Dec 22 13:15:35.387: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 13:15:36.433: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 22 13:15:36.434: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled
True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }] Dec 22 13:15:36.434: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }] Dec 22 13:15:36.434: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }] Dec 22 13:15:36.434: INFO: Dec 22 13:15:36.434: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 22 13:15:37.443: INFO: POD NODE PHASE GRACE CONDITIONS Dec 22 13:15:37.443: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:32 +0000 UTC }] Dec 22 13:15:37.443: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }] Dec 22 13:15:37.443: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:14:53 +0000 UTC }] Dec 22 13:15:37.443: INFO: Dec 22 13:15:37.443: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-456 Dec 22 13:15:38.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:15:38.664: INFO: rc: 1 Dec 22 13:15:38.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001c34780 exit status 1 true [0xc000709c40 0xc000709ce0 0xc000709d48] [0xc000709c40 0xc000709ce0 0xc000709d48] [0xc000709cd8 0xc000709d18] [0xba6c50 0xba6c50] 0xc001f66540 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 22 13:15:48.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 
22 13:15:48.804: INFO: rc: 1 Dec 22 13:15:48.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024a4cc0 exit status 1 true [0xc00299c980 0xc00299c998 0xc00299c9b0] [0xc00299c980 0xc00299c998 0xc00299c9b0] [0xc00299c990 0xc00299c9a8] [0xba6c50 0xba6c50] 0xc001f35680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:15:58.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:15:59.003: INFO: rc: 1 Dec 22 13:15:59.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024a4db0 exit status 1 true [0xc00299c9b8 0xc00299c9d0 0xc00299c9e8] [0xc00299c9b8 0xc00299c9d0 0xc00299c9e8] [0xc00299c9c8 0xc00299c9e0] [0xba6c50 0xba6c50] 0xc001c8f3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:09.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:09.106: INFO: rc: 1 Dec 22 13:16:09.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ed0990 
exit status 1 true [0xc0023642f8 0xc002364318 0xc002364330] [0xc0023642f8 0xc002364318 0xc002364330] [0xc002364310 0xc002364328] [0xba6c50 0xba6c50] 0xc0031e95c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:19.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:19.233: INFO: rc: 1 Dec 22 13:16:19.233: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ed0a80 exit status 1 true [0xc002364338 0xc002364350 0xc002364368] [0xc002364338 0xc002364350 0xc002364368] [0xc002364348 0xc002364360] [0xba6c50 0xba6c50] 0xc0031e9f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:29.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:29.409: INFO: rc: 1 Dec 22 13:16:29.410: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce090 exit status 1 true [0xc001e16068 0xc001e160d8 0xc001e16158] [0xc001e16068 0xc001e160d8 0xc001e16158] [0xc001e160d0 0xc001e16110] [0xba6c50 0xba6c50] 0xc001f35680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:39.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:39.618: INFO: rc: 1 Dec 22 13:16:39.618: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c090 exit status 1 true [0xc002364000 0xc002364018 0xc002364030] [0xc002364000 0xc002364018 0xc002364030] [0xc002364010 0xc002364028] [0xba6c50 0xba6c50] 0xc0022d8d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:49.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:49.740: INFO: rc: 1 Dec 22 13:16:49.740: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c180 exit status 1 true [0xc002364038 0xc002364050 0xc002364068] [0xc002364038 0xc002364050 0xc002364068] [0xc002364048 0xc002364060] [0xba6c50 0xba6c50] 0xc0022d98c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:16:59.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:16:59.934: INFO: rc: 1 Dec 22 13:16:59.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce150 exit status 1 true [0xc001e16188 0xc001e161a8 0xc001e16298] [0xc001e16188 0xc001e161a8 0xc001e16298] [0xc001e161a0 0xc001e16250] [0xba6c50 0xba6c50] 0xc0021fa840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:17:09.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:17:10.158: INFO: rc: 1 Dec 22 13:17:10.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c2a0 exit status 1 true [0xc002364070 0xc002364088 0xc0023640a0] [0xc002364070 0xc002364088 0xc0023640a0] [0xc002364080 0xc002364098] [0xba6c50 0xba6c50] 0xc0031e8180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:17:20.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:17:20.315: INFO: rc: 1 Dec 22 13:17:20.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ee0090 exit status 1 true [0xc00299c000 0xc00299c018 0xc00299c030] [0xc00299c000 0xc00299c018 0xc00299c030] [0xc00299c010 0xc00299c028] [0xba6c50 0xba6c50] 0xc002e14420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Dec 22 13:17:30.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:17:30.511: INFO: rc: 1 Dec 22 13:17:30.512: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c390 exit status 1 true [0xc0023640a8 0xc0023640c0 0xc0023640d8] [0xc0023640a8 0xc0023640c0 0xc0023640d8] [0xc0023640b8 0xc0023640d0] [0xba6c50 0xba6c50] 0xc0031e8660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:17:40.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:17:40.649: INFO: rc: 1 Dec 22 13:17:40.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ee0180 exit status 1 true [0xc00299c038 0xc00299c050 0xc00299c068] [0xc00299c038 0xc00299c050 0xc00299c068] [0xc00299c048 0xc00299c060] [0xba6c50 0xba6c50] 0xc002e149c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:17:50.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:17:50.807: INFO: rc: 1 Dec 22 13:17:50.807: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ee0240 exit status 1 true [0xc00299c070 0xc00299c088 0xc00299c0a0] [0xc00299c070 0xc00299c088 0xc00299c0a0] [0xc00299c080 0xc00299c098] [0xba6c50 0xba6c50] 0xc002e14f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:00.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:18:00.951: INFO: rc: 1 Dec 22 13:18:00.951: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce210 exit status 1 true [0xc001e162f8 0xc001e16340 0xc001e16388] [0xc001e162f8 0xc001e16340 0xc001e16388] [0xc001e16328 0xc001e16368] [0xba6c50 0xba6c50] 0xc0021fb500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:10.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:18:11.036: INFO: rc: 1 Dec 22 13:18:11.036: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce300 exit status 1 true [0xc001e163c8 0xc001e16440 0xc001e164c8] [0xc001e163c8 0xc001e16440 0xc001e164c8] 
[0xc001e16408 0xc001e164b0] [0xba6c50 0xba6c50] 0xc00246e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:21.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:18:21.170: INFO: rc: 1 Dec 22 13:18:21.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104c0c0 exit status 1 true [0xc0031da008 0xc0031da020 0xc0031da038] [0xc0031da008 0xc0031da020 0xc0031da038] [0xc0031da018 0xc0031da030] [0xba6c50 0xba6c50] 0xc002ebe1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:31.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:18:31.593: INFO: rc: 1 Dec 22 13:18:31.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00237a090 exit status 1 true [0xc0031da040 0xc0031da058 0xc0031da070] [0xc0031da040 0xc0031da058 0xc0031da070] [0xc0031da050 0xc0031da068] [0xba6c50 0xba6c50] 0xc0021fa840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:41.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Dec 22 13:18:41.725: INFO: rc: 1 Dec 22 13:18:41.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00237a150 exit status 1 true [0xc0031da078 0xc0031da090 0xc0031da0a8] [0xc0031da078 0xc0031da090 0xc0031da0a8] [0xc0031da088 0xc0031da0a0] [0xba6c50 0xba6c50] 0xc0021fb500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:18:51.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:18:51.897: INFO: rc: 1 Dec 22 13:18:51.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00237a240 exit status 1 true [0xc0031da0b0 0xc0031da0c8 0xc0031da0e0] [0xc0031da0b0 0xc0031da0c8 0xc0031da0e0] [0xc0031da0c0 0xc0031da0d8] [0xba6c50 0xba6c50] 0xc0022d86c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:01.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:02.060: INFO: rc: 1 Dec 22 13:19:02.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc00179c0c0 exit status 1 true [0xc001e16038 0xc001e160d0 0xc001e16110] [0xc001e16038 0xc001e160d0 0xc001e16110] [0xc001e16090 0xc001e160e8] [0xba6c50 0xba6c50] 0xc001f35680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:12.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:12.208: INFO: rc: 1 Dec 22 13:19:12.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c1e0 exit status 1 true [0xc001e16158 0xc001e161a0 0xc001e16250] [0xc001e16158 0xc001e161a0 0xc001e16250] [0xc001e16198 0xc001e16218] [0xba6c50 0xba6c50] 0xc002ebe4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:22.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:22.326: INFO: rc: 1 Dec 22 13:19:22.326: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c2d0 exit status 1 true [0xc001e16298 0xc001e16328 0xc001e16368] [0xc001e16298 0xc001e16328 0xc001e16368] [0xc001e16308 0xc001e16358] [0xba6c50 0xba6c50] 0xc002ebe840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:32.327: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:32.403: INFO: rc: 1 Dec 22 13:19:32.403: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00179c3f0 exit status 1 true [0xc001e16388 0xc001e16408 0xc001e164b0] [0xc001e16388 0xc001e16408 0xc001e164b0] [0xc001e16400 0xc001e16490] [0xba6c50 0xba6c50] 0xc002ebfbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:42.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:42.655: INFO: rc: 1 Dec 22 13:19:42.655: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce180 exit status 1 true [0xc002364000 0xc002364018 0xc002364030] [0xc002364000 0xc002364018 0xc002364030] [0xc002364010 0xc002364028] [0xba6c50 0xba6c50] 0xc00246eba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:19:52.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:19:52.802: INFO: rc: 1 Dec 22 13:19:52.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce270 exit status 1 true [0xc002364038 0xc002364050 0xc002364068] [0xc002364038 0xc002364050 0xc002364068] [0xc002364048 0xc002364060] [0xba6c50 0xba6c50] 0xc00246f4a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:20:02.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:20:02.935: INFO: rc: 1 Dec 22 13:20:02.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00237a360 exit status 1 true [0xc0031da0e8 0xc0031da100 0xc0031da118] [0xc0031da0e8 0xc0031da100 0xc0031da118] [0xc0031da0f8 0xc0031da110] [0xba6c50 0xba6c50] 0xc0022d9380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:20:12.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:20:13.090: INFO: rc: 1 Dec 22 13:20:13.090: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce390 exit status 1 true [0xc002364070 0xc002364088 0xc0023640a0] [0xc002364070 0xc002364088 0xc0023640a0] [0xc002364080 0xc002364098] [0xba6c50 0xba6c50] 0xc00246fce0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:20:23.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:20:23.244: INFO: rc: 1 Dec 22 13:20:23.244: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104c090 exit status 1 true [0xc002364008 0xc002364020 0xc002364038] [0xc002364008 0xc002364020 0xc002364038] [0xc002364018 0xc002364030] [0xba6c50 0xba6c50] 0xc001f35680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:20:33.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:20:33.354: INFO: rc: 1 Dec 22 13:20:33.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031ce090 exit status 1 true [0xc001e16038 0xc001e160d0 0xc001e16110] [0xc001e16038 0xc001e160d0 0xc001e16110] [0xc001e16090 0xc001e160e8] [0xba6c50 0xba6c50] 0xc0021faa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 22 13:20:43.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-456 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 13:20:43.525: INFO: rc: 1 Dec 22 13:20:43.526: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: Dec 22 13:20:43.526: INFO: Scaling statefulset ss to 0 Dec 22 13:20:43.550: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 22 13:20:43.554: INFO: Deleting all statefulset in ns statefulset-456 Dec 22 13:20:43.557: INFO: Scaling statefulset ss to 0 Dec 22 13:20:43.566: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 13:20:43.569: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 22 13:20:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-456" for this suite. Dec 22 13:20:49.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 13:20:49.785: INFO: namespace statefulset-456 deletion completed in 6.166441309s • [SLOW TEST:377.547 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:20:49.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 22 13:20:49.858: INFO: Waiting up to 5m0s for pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff" in namespace "containers-7634" to be "success or failure"
Dec 22 13:20:49.882: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 23.788979ms
Dec 22 13:20:51.891: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03274534s
Dec 22 13:20:53.901: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042777135s
Dec 22 13:20:55.907: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049600131s
Dec 22 13:20:57.917: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058768071s
Dec 22 13:20:59.926: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068472988s
STEP: Saw pod success
Dec 22 13:20:59.926: INFO: Pod "client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff" satisfied condition "success or failure"
Dec 22 13:20:59.931: INFO: Trying to get logs from node iruya-node pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff container test-container:
STEP: delete the pod
Dec 22 13:21:00.034: INFO: Waiting for pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff to disappear
Dec 22 13:21:00.041: INFO: Pod client-containers-c8dd9aa5-31fe-475f-9657-3409fa7fe3ff no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7634" for this suite.
Dec 22 13:21:06.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:06.161: INFO: namespace containers-7634 deletion completed in 6.115935308s

• [SLOW TEST:16.376 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:06.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-690d7fb7-623b-4042-adb6-ca60e181a8c5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-690d7fb7-623b-4042-adb6-ca60e181a8c5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:20.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-30" for this suite.
Dec 22 13:21:36.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:36.702: INFO: namespace projected-30 deletion completed in 16.161262005s

• [SLOW TEST:30.541 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:36.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c9a4a4bb-586f-43ac-aa3e-fd4669f4ba15
STEP: Creating a pod to test consume secrets
Dec 22 13:21:36.780: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979" in namespace "projected-4160" to be "success or failure"
Dec 22 13:21:36.857: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 77.189059ms
Dec 22 13:21:38.864: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084405939s
Dec 22 13:21:40.881: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101194867s
Dec 22 13:21:42.889: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10952102s
Dec 22 13:21:44.902: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121949357s
STEP: Saw pod success
Dec 22 13:21:44.902: INFO: Pod "pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979" satisfied condition "success or failure"
Dec 22 13:21:44.908: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 container secret-volume-test:
STEP: delete the pod
Dec 22 13:21:45.179: INFO: Waiting for pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 to disappear
Dec 22 13:21:45.188: INFO: Pod pod-projected-secrets-e3ded85a-f884-4f71-85cc-43f05c7c2979 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:45.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4160" for this suite.
Dec 22 13:21:51.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:21:51.437: INFO: namespace projected-4160 deletion completed in 6.228578498s

• [SLOW TEST:14.733 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:21:51.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-02f3fc93-70f7-44f9-be1d-93d35a37a31d
STEP: Creating a pod to test consume secrets
Dec 22 13:21:51.586: INFO: Waiting up to 5m0s for pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658" in namespace "secrets-5835" to be "success or failure"
Dec 22 13:21:51.590: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468067ms
Dec 22 13:21:53.606: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020575791s
Dec 22 13:21:55.613: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026764303s
Dec 22 13:21:57.619: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033524644s
Dec 22 13:21:59.628: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042148906s
STEP: Saw pod success
Dec 22 13:21:59.628: INFO: Pod "pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658" satisfied condition "success or failure"
Dec 22 13:21:59.636: INFO: Trying to get logs from node iruya-node pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 container secret-volume-test:
STEP: delete the pod
Dec 22 13:21:59.731: INFO: Waiting for pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 to disappear
Dec 22 13:21:59.741: INFO: Pod pod-secrets-3234dcb6-6685-47fa-94e5-5f6880a9e658 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:21:59.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5835" for this suite.
Dec 22 13:22:05.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:05.962: INFO: namespace secrets-5835 deletion completed in 6.204264721s

• [SLOW TEST:14.524 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:05.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:22:06.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720" in namespace "projected-8497" to be "success or failure"
Dec 22 13:22:06.094: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.873279ms
Dec 22 13:22:08.109: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021984328s
Dec 22 13:22:10.117: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030544098s
Dec 22 13:22:12.133: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046310949s
Dec 22 13:22:14.148: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061765999s
STEP: Saw pod success
Dec 22 13:22:14.149: INFO: Pod "downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720" satisfied condition "success or failure"
Dec 22 13:22:14.153: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 container client-container:
STEP: delete the pod
Dec 22 13:22:14.242: INFO: Waiting for pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 to disappear
Dec 22 13:22:14.292: INFO: Pod downwardapi-volume-bcedcbc3-0d73-42bf-9433-89a5fe7a3720 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:14.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8497" for this suite.
Dec 22 13:22:20.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:20.463: INFO: namespace projected-8497 deletion completed in 6.16218438s

• [SLOW TEST:14.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:20.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1222 13:22:25.273786 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 13:22:25.274: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:25.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7094" for this suite.
Dec 22 13:22:31.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:22:31.562: INFO: namespace gc-7094 deletion completed in 6.274024912s

• [SLOW TEST:11.099 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:22:31.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 22 13:22:40.255: INFO: Successfully updated pod "pod-update-2097fc5a-a618-4a4f-950f-3b32d5aa1047"
STEP: verifying the updated pod is in kubernetes
Dec 22 13:22:40.276: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:22:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7039" for this suite.
Dec 22 13:23:02.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:02.413: INFO: namespace pods-7039 deletion completed in 22.128985788s

• [SLOW TEST:30.850 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:02.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:23:02.617: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b" in namespace "projected-9887" to be "success or failure"
Dec 22 13:23:02.648: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.800961ms
Dec 22 13:23:04.663: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046182263s
Dec 22 13:23:06.673: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056339018s
Dec 22 13:23:08.685: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068420835s
Dec 22 13:23:10.694: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077425616s
STEP: Saw pod success
Dec 22 13:23:10.694: INFO: Pod "downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b" satisfied condition "success or failure"
Dec 22 13:23:10.699: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b container client-container:
STEP: delete the pod
Dec 22 13:23:10.816: INFO: Waiting for pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b to disappear
Dec 22 13:23:10.828: INFO: Pod downwardapi-volume-3a5bb312-ddc2-48ce-9e23-b6f6baee388b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:10.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9887" for this suite.
Dec 22 13:23:16.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:17.094: INFO: namespace projected-9887 deletion completed in 6.261329368s

• [SLOW TEST:14.681 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:17.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c
Dec 22 13:23:17.255: INFO: Pod name my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Found 0 pods out of 1
Dec 22 13:23:22.261: INFO: Pod name my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Found 1 pods out of 1
Dec 22 13:23:22.261: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c" are running
Dec 22 13:23:26.274: INFO: Pod "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 13:23:17 +0000 UTC Reason: Message:}])
Dec 22 13:23:26.275: INFO: Trying to dial the pod
Dec 22 13:23:31.330: INFO: Controller my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c: Got expected result from replica 1 [my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89]: "my-hostname-basic-c9c0d8e7-ac64-4bca-b0a8-9f5e7631bd4c-4kw89", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:31.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1422" for this suite.
Dec 22 13:23:37.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:37.496: INFO: namespace replication-controller-1422 deletion completed in 6.156983169s

• [SLOW TEST:20.401 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:37.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5be31e07-ec7c-4d7a-9bc5-a3deacf3f5b5
STEP: Creating a pod to test consume configMaps
Dec 22 13:23:38.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6" in namespace "configmap-2742" to be "success or failure"
Dec 22 13:23:38.056: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.936642ms
Dec 22 13:23:40.063: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021645556s
Dec 22 13:23:42.071: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029834291s
Dec 22 13:23:44.089: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048442879s
Dec 22 13:23:46.097: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056138302s
Dec 22 13:23:48.105: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063640305s
STEP: Saw pod success
Dec 22 13:23:48.105: INFO: Pod "pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6" satisfied condition "success or failure"
Dec 22 13:23:48.110: INFO: Trying to get logs from node iruya-node pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 container configmap-volume-test:
STEP: delete the pod
Dec 22 13:23:48.455: INFO: Waiting for pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 to disappear
Dec 22 13:23:48.470: INFO: Pod pod-configmaps-307a390f-61a4-482d-8cb5-bdb1c40bcec6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:23:48.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2742" for this suite.
Dec 22 13:23:54.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:23:54.711: INFO: namespace configmap-2742 deletion completed in 6.231488049s

• [SLOW TEST:17.216 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:23:54.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5320
I1222 13:23:54.792056 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5320, replica count: 1
I1222 13:23:55.842829 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:23:56.843162 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:23:57.843596 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:23:58.843857 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:23:59.844163 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:24:00.844533 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:24:01.844876 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1222 13:24:02.845162 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 22 13:24:03.062: INFO: Created: latency-svc-k2zdk
Dec 22 13:24:03.076: INFO: Got endpoints: latency-svc-k2zdk [130.614588ms]
Dec 22 13:24:03.238: INFO: Created: latency-svc-vblvk
Dec 22 13:24:03.257: INFO: Got endpoints: latency-svc-vblvk [180.45112ms]
Dec 22 13:24:03.310: INFO: Created: latency-svc-fkq8n
Dec 22 13:24:03.310: INFO: Got endpoints: latency-svc-fkq8n [232.847084ms]
Dec 22 13:24:03.436: INFO: Created: latency-svc-5z28t
Dec 22 13:24:03.445: INFO: Got endpoints: latency-svc-5z28t [367.232495ms]
Dec 22 13:24:03.491: INFO: Created: latency-svc-6vpkc
Dec 22 13:24:03.504: INFO: Got endpoints: latency-svc-6vpkc [426.39917ms]
Dec 22 13:24:03.621: INFO: Created: latency-svc-q58kd
Dec 22 13:24:03.643: INFO: Got endpoints: latency-svc-q58kd [565.746448ms]
Dec 22 13:24:03.845: INFO: Created: latency-svc-4v4gx
Dec 22 13:24:03.862: INFO: Got endpoints: latency-svc-4v4gx [783.949542ms]
Dec 22 13:24:03.934: INFO: Created: latency-svc-cr2rx
Dec 22 13:24:04.031: INFO: Got endpoints: latency-svc-cr2rx [953.546087ms]
Dec 22 13:24:04.079: INFO: Created: latency-svc-ps6sw
Dec 22 13:24:04.112: INFO: Got endpoints: latency-svc-ps6sw [1.034387633s]
Dec 22 13:24:04.295: INFO: Created: latency-svc-bkj7g
Dec 22 13:24:04.295: INFO: Got endpoints: latency-svc-bkj7g [1.217527412s]
Dec 22 13:24:04.356: INFO: Created: latency-svc-npjhc
Dec 22 13:24:04.368: INFO: Got endpoints: latency-svc-npjhc [1.289727861s]
Dec 22 13:24:04.493: INFO: Created: latency-svc-2lzm2
Dec 22 13:24:04.501: INFO: Got endpoints: latency-svc-2lzm2 [1.42296604s]
Dec 22 13:24:04.638: INFO: Created: latency-svc-6gs46
Dec 22 13:24:04.650: INFO: Got endpoints: latency-svc-6gs46 [1.572315558s]
Dec 22 13:24:04.816: INFO: Created: latency-svc-94l9s
Dec 22 13:24:04.824: INFO: Got endpoints: latency-svc-94l9s [1.746095282s]
Dec 22 13:24:05.018: INFO: Created: latency-svc-wzgwr
Dec 22 13:24:05.025: INFO: Got endpoints: latency-svc-wzgwr [1.946713443s]
Dec 22 13:24:05.092: INFO: Created: latency-svc-mnl46
Dec 22 13:24:05.315: INFO: Got endpoints: latency-svc-mnl46 [2.237437141s]
Dec 22 13:24:05.320: INFO: Created: latency-svc-st2sx
Dec 22 13:24:05.332: INFO: Got endpoints: latency-svc-st2sx [2.074560345s]
Dec 22 13:24:05.412: INFO: Created: latency-svc-ppp9k
Dec 22 13:24:05.487: INFO: Got endpoints: latency-svc-ppp9k [2.177088343s]
Dec 22 13:24:05.530: INFO: Created: latency-svc-jpmbb
Dec 22 13:24:05.546: INFO: Got endpoints: latency-svc-jpmbb [2.10067266s]
Dec 22 13:24:05.682: INFO: Created: latency-svc-ghhfs
Dec 22 13:24:05.682: INFO: Got endpoints: latency-svc-ghhfs [2.177863286s]
Dec 22 13:24:05.720: INFO: Created: latency-svc-4jrzl
Dec 22 13:24:05.733: INFO: Got endpoints: latency-svc-4jrzl [2.089607978s]
Dec 22 13:24:05.852: INFO: Created: latency-svc-lq9dw
Dec 22 13:24:05.880: INFO: Got endpoints: latency-svc-lq9dw [2.018108332s]
Dec 22 13:24:05.923: INFO: Created: latency-svc-skgpp
Dec 22 13:24:05.929: INFO: Got endpoints: latency-svc-skgpp [1.89710653s]
Dec 22 13:24:06.056: INFO: Created: latency-svc-dlj6n
Dec 22 13:24:06.065:
INFO: Got endpoints: latency-svc-dlj6n [1.952612979s] Dec 22 13:24:06.115: INFO: Created: latency-svc-zs82g Dec 22 13:24:06.199: INFO: Got endpoints: latency-svc-zs82g [1.903833448s] Dec 22 13:24:06.255: INFO: Created: latency-svc-bzrxg Dec 22 13:24:06.277: INFO: Got endpoints: latency-svc-bzrxg [1.90961781s] Dec 22 13:24:06.370: INFO: Created: latency-svc-dcwjn Dec 22 13:24:06.371: INFO: Got endpoints: latency-svc-dcwjn [1.870437152s] Dec 22 13:24:06.446: INFO: Created: latency-svc-jwgrz Dec 22 13:24:06.450: INFO: Got endpoints: latency-svc-jwgrz [1.799736242s] Dec 22 13:24:06.687: INFO: Created: latency-svc-gslhd Dec 22 13:24:06.703: INFO: Got endpoints: latency-svc-gslhd [1.878762036s] Dec 22 13:24:06.788: INFO: Created: latency-svc-8vkh5 Dec 22 13:24:06.817: INFO: Got endpoints: latency-svc-8vkh5 [1.792480258s] Dec 22 13:24:06.864: INFO: Created: latency-svc-wpqwc Dec 22 13:24:06.880: INFO: Got endpoints: latency-svc-wpqwc [1.564823575s] Dec 22 13:24:06.986: INFO: Created: latency-svc-cq2f2 Dec 22 13:24:07.008: INFO: Got endpoints: latency-svc-cq2f2 [1.675666137s] Dec 22 13:24:07.071: INFO: Created: latency-svc-tt5fp Dec 22 13:24:07.077: INFO: Got endpoints: latency-svc-tt5fp [1.589566225s] Dec 22 13:24:07.258: INFO: Created: latency-svc-brbrs Dec 22 13:24:07.300: INFO: Got endpoints: latency-svc-brbrs [1.754653284s] Dec 22 13:24:07.417: INFO: Created: latency-svc-zkcrj Dec 22 13:24:07.418: INFO: Got endpoints: latency-svc-zkcrj [1.735401457s] Dec 22 13:24:07.552: INFO: Created: latency-svc-w6v4t Dec 22 13:24:07.553: INFO: Got endpoints: latency-svc-w6v4t [1.819548303s] Dec 22 13:24:07.639: INFO: Created: latency-svc-cgsg8 Dec 22 13:24:07.639: INFO: Got endpoints: latency-svc-cgsg8 [1.759309535s] Dec 22 13:24:07.719: INFO: Created: latency-svc-qg744 Dec 22 13:24:07.731: INFO: Got endpoints: latency-svc-qg744 [1.801803738s] Dec 22 13:24:07.784: INFO: Created: latency-svc-46g24 Dec 22 13:24:07.846: INFO: Got endpoints: latency-svc-46g24 [1.780544578s] Dec 22 
13:24:07.900: INFO: Created: latency-svc-vf2wf Dec 22 13:24:07.905: INFO: Got endpoints: latency-svc-vf2wf [1.705322737s] Dec 22 13:24:08.000: INFO: Created: latency-svc-hp52t Dec 22 13:24:08.010: INFO: Got endpoints: latency-svc-hp52t [1.732426525s] Dec 22 13:24:08.083: INFO: Created: latency-svc-6wc7d Dec 22 13:24:08.168: INFO: Got endpoints: latency-svc-6wc7d [1.796873985s] Dec 22 13:24:08.236: INFO: Created: latency-svc-qwffw Dec 22 13:24:08.238: INFO: Got endpoints: latency-svc-qwffw [1.787458295s] Dec 22 13:24:08.839: INFO: Created: latency-svc-z7gkx Dec 22 13:24:08.843: INFO: Got endpoints: latency-svc-z7gkx [2.139947014s] Dec 22 13:24:08.962: INFO: Created: latency-svc-2x4lf Dec 22 13:24:08.965: INFO: Got endpoints: latency-svc-2x4lf [2.147476175s] Dec 22 13:24:09.015: INFO: Created: latency-svc-4rdnt Dec 22 13:24:09.028: INFO: Got endpoints: latency-svc-4rdnt [2.148113992s] Dec 22 13:24:09.159: INFO: Created: latency-svc-jfnbd Dec 22 13:24:09.188: INFO: Got endpoints: latency-svc-jfnbd [2.180291324s] Dec 22 13:24:09.195: INFO: Created: latency-svc-jmg9r Dec 22 13:24:09.214: INFO: Got endpoints: latency-svc-jmg9r [2.136810934s] Dec 22 13:24:09.305: INFO: Created: latency-svc-p68bs Dec 22 13:24:09.316: INFO: Got endpoints: latency-svc-p68bs [2.015928196s] Dec 22 13:24:09.413: INFO: Created: latency-svc-bb885 Dec 22 13:24:09.666: INFO: Got endpoints: latency-svc-bb885 [2.248450172s] Dec 22 13:24:09.703: INFO: Created: latency-svc-brh2d Dec 22 13:24:09.711: INFO: Got endpoints: latency-svc-brh2d [2.15862688s] Dec 22 13:24:09.860: INFO: Created: latency-svc-jc8b6 Dec 22 13:24:09.918: INFO: Got endpoints: latency-svc-jc8b6 [2.278913481s] Dec 22 13:24:09.921: INFO: Created: latency-svc-xdcjq Dec 22 13:24:09.934: INFO: Got endpoints: latency-svc-xdcjq [2.202945314s] Dec 22 13:24:10.040: INFO: Created: latency-svc-swddp Dec 22 13:24:10.058: INFO: Got endpoints: latency-svc-swddp [2.211718462s] Dec 22 13:24:10.107: INFO: Created: latency-svc-vm6w6 Dec 22 
13:24:10.107: INFO: Got endpoints: latency-svc-vm6w6 [2.20258625s] Dec 22 13:24:10.231: INFO: Created: latency-svc-ll5f5 Dec 22 13:24:10.239: INFO: Got endpoints: latency-svc-ll5f5 [2.229262438s] Dec 22 13:24:10.410: INFO: Created: latency-svc-smfgt Dec 22 13:24:10.410: INFO: Got endpoints: latency-svc-smfgt [2.241103367s] Dec 22 13:24:10.479: INFO: Created: latency-svc-brxdt Dec 22 13:24:10.663: INFO: Got endpoints: latency-svc-brxdt [2.425530257s] Dec 22 13:24:10.691: INFO: Created: latency-svc-drt9c Dec 22 13:24:10.694: INFO: Got endpoints: latency-svc-drt9c [1.850756912s] Dec 22 13:24:10.748: INFO: Created: latency-svc-jkgkg Dec 22 13:24:10.751: INFO: Got endpoints: latency-svc-jkgkg [1.78626041s] Dec 22 13:24:10.862: INFO: Created: latency-svc-wbl4l Dec 22 13:24:10.877: INFO: Got endpoints: latency-svc-wbl4l [1.84837421s] Dec 22 13:24:10.911: INFO: Created: latency-svc-cjnvz Dec 22 13:24:10.929: INFO: Got endpoints: latency-svc-cjnvz [1.741052751s] Dec 22 13:24:11.019: INFO: Created: latency-svc-ppx7x Dec 22 13:24:11.024: INFO: Got endpoints: latency-svc-ppx7x [1.810231293s] Dec 22 13:24:11.084: INFO: Created: latency-svc-z5xh6 Dec 22 13:24:11.197: INFO: Got endpoints: latency-svc-z5xh6 [1.880674688s] Dec 22 13:24:11.220: INFO: Created: latency-svc-65547 Dec 22 13:24:11.238: INFO: Got endpoints: latency-svc-65547 [1.571290876s] Dec 22 13:24:11.327: INFO: Created: latency-svc-s7blp Dec 22 13:24:11.327: INFO: Got endpoints: latency-svc-s7blp [1.615746571s] Dec 22 13:24:11.438: INFO: Created: latency-svc-j8nbg Dec 22 13:24:11.458: INFO: Got endpoints: latency-svc-j8nbg [1.539741881s] Dec 22 13:24:11.503: INFO: Created: latency-svc-29s4x Dec 22 13:24:11.574: INFO: Got endpoints: latency-svc-29s4x [1.639964083s] Dec 22 13:24:11.604: INFO: Created: latency-svc-5x8qp Dec 22 13:24:11.604: INFO: Got endpoints: latency-svc-5x8qp [1.546372598s] Dec 22 13:24:11.670: INFO: Created: latency-svc-d6959 Dec 22 13:24:11.676: INFO: Got endpoints: latency-svc-d6959 [1.568794784s] 
Dec 22 13:24:11.827: INFO: Created: latency-svc-7vm6v
Dec 22 13:24:11.827: INFO: Got endpoints: latency-svc-7vm6v [1.588029367s]
Dec 22 13:24:11.870: INFO: Created: latency-svc-5p6qp
Dec 22 13:24:11.930: INFO: Got endpoints: latency-svc-5p6qp [1.519989027s]
Dec 22 13:24:12.036: INFO: Created: latency-svc-vdcv4
Dec 22 13:24:12.090: INFO: Got endpoints: latency-svc-vdcv4 [1.426089626s]
Dec 22 13:24:12.128: INFO: Created: latency-svc-nkjpj
Dec 22 13:24:12.170: INFO: Got endpoints: latency-svc-nkjpj [1.475557524s]
Dec 22 13:24:12.187: INFO: Created: latency-svc-2gpft
Dec 22 13:24:12.188: INFO: Got endpoints: latency-svc-2gpft [1.436547692s]
Dec 22 13:24:12.337: INFO: Created: latency-svc-4q4lc
Dec 22 13:24:12.355: INFO: Got endpoints: latency-svc-4q4lc [1.478050178s]
Dec 22 13:24:12.388: INFO: Created: latency-svc-fm9b5
Dec 22 13:24:12.396: INFO: Got endpoints: latency-svc-fm9b5 [1.466011948s]
Dec 22 13:24:12.560: INFO: Created: latency-svc-czgvc
Dec 22 13:24:12.560: INFO: Got endpoints: latency-svc-czgvc [1.535266761s]
Dec 22 13:24:12.630: INFO: Created: latency-svc-5656t
Dec 22 13:24:12.686: INFO: Got endpoints: latency-svc-5656t [1.489203392s]
Dec 22 13:24:12.748: INFO: Created: latency-svc-2gxcc
Dec 22 13:24:12.748: INFO: Got endpoints: latency-svc-2gxcc [1.510605152s]
Dec 22 13:24:12.872: INFO: Created: latency-svc-szgx4
Dec 22 13:24:12.884: INFO: Got endpoints: latency-svc-szgx4 [197.87474ms]
Dec 22 13:24:12.960: INFO: Created: latency-svc-wzfr5
Dec 22 13:24:13.031: INFO: Got endpoints: latency-svc-wzfr5 [1.703360361s]
Dec 22 13:24:13.201: INFO: Created: latency-svc-zd7m4
Dec 22 13:24:13.206: INFO: Got endpoints: latency-svc-zd7m4 [1.747595499s]
Dec 22 13:24:13.290: INFO: Created: latency-svc-pprzg
Dec 22 13:24:13.346: INFO: Got endpoints: latency-svc-pprzg [1.772229973s]
Dec 22 13:24:13.385: INFO: Created: latency-svc-b48r6
Dec 22 13:24:13.397: INFO: Got endpoints: latency-svc-b48r6 [1.792951409s]
Dec 22 13:24:13.510: INFO: Created: latency-svc-w76zk
Dec 22 13:24:13.520: INFO: Got endpoints: latency-svc-w76zk [1.844010955s]
Dec 22 13:24:13.620: INFO: Created: latency-svc-7swh9
Dec 22 13:24:13.715: INFO: Got endpoints: latency-svc-7swh9 [1.887616369s]
Dec 22 13:24:13.724: INFO: Created: latency-svc-wtbg2
Dec 22 13:24:13.747: INFO: Got endpoints: latency-svc-wtbg2 [1.817084995s]
Dec 22 13:24:13.806: INFO: Created: latency-svc-rsx66
Dec 22 13:24:13.894: INFO: Got endpoints: latency-svc-rsx66 [1.804549533s]
Dec 22 13:24:13.934: INFO: Created: latency-svc-hnnz4
Dec 22 13:24:13.976: INFO: Got endpoints: latency-svc-hnnz4 [1.805511899s]
Dec 22 13:24:14.116: INFO: Created: latency-svc-xn4gw
Dec 22 13:24:14.131: INFO: Got endpoints: latency-svc-xn4gw [1.943260641s]
Dec 22 13:24:14.311: INFO: Created: latency-svc-fcgjg
Dec 22 13:24:14.346: INFO: Got endpoints: latency-svc-fcgjg [1.990468475s]
Dec 22 13:24:14.347: INFO: Created: latency-svc-bvzvj
Dec 22 13:24:14.360: INFO: Got endpoints: latency-svc-bvzvj [1.96387866s]
Dec 22 13:24:14.399: INFO: Created: latency-svc-7qftd
Dec 22 13:24:14.529: INFO: Got endpoints: latency-svc-7qftd [1.969314349s]
Dec 22 13:24:14.563: INFO: Created: latency-svc-5798x
Dec 22 13:24:14.563: INFO: Got endpoints: latency-svc-5798x [1.814846908s]
Dec 22 13:24:14.619: INFO: Created: latency-svc-4knrg
Dec 22 13:24:14.753: INFO: Got endpoints: latency-svc-4knrg [1.868158682s]
Dec 22 13:24:14.772: INFO: Created: latency-svc-llgfp
Dec 22 13:24:14.793: INFO: Got endpoints: latency-svc-llgfp [1.76250716s]
Dec 22 13:24:14.831: INFO: Created: latency-svc-t4b7f
Dec 22 13:24:14.841: INFO: Got endpoints: latency-svc-t4b7f [1.634879537s]
Dec 22 13:24:14.952: INFO: Created: latency-svc-mjjsc
Dec 22 13:24:14.962: INFO: Got endpoints: latency-svc-mjjsc [1.615068024s]
Dec 22 13:24:14.995: INFO: Created: latency-svc-kvqvm
Dec 22 13:24:15.007: INFO: Got endpoints: latency-svc-kvqvm [1.609188043s]
Dec 22 13:24:15.125: INFO: Created: latency-svc-brlmg
Dec 22 13:24:15.140: INFO: Got endpoints: latency-svc-brlmg [1.619484088s]
Dec 22 13:24:15.235: INFO: Created: latency-svc-wcs52
Dec 22 13:24:15.334: INFO: Got endpoints: latency-svc-wcs52 [1.619035014s]
Dec 22 13:24:15.381: INFO: Created: latency-svc-7qkkr
Dec 22 13:24:15.393: INFO: Got endpoints: latency-svc-7qkkr [1.646289793s]
Dec 22 13:24:15.450: INFO: Created: latency-svc-5tm9r
Dec 22 13:24:15.577: INFO: Got endpoints: latency-svc-5tm9r [1.682535248s]
Dec 22 13:24:15.595: INFO: Created: latency-svc-srw76
Dec 22 13:24:15.612: INFO: Got endpoints: latency-svc-srw76 [1.636699111s]
Dec 22 13:24:15.650: INFO: Created: latency-svc-2k747
Dec 22 13:24:15.665: INFO: Got endpoints: latency-svc-2k747 [1.534134171s]
Dec 22 13:24:15.808: INFO: Created: latency-svc-hzmt6
Dec 22 13:24:15.808: INFO: Got endpoints: latency-svc-hzmt6 [1.462118445s]
Dec 22 13:24:15.867: INFO: Created: latency-svc-8dxph
Dec 22 13:24:16.048: INFO: Got endpoints: latency-svc-8dxph [1.687886837s]
Dec 22 13:24:16.083: INFO: Created: latency-svc-fhdgv
Dec 22 13:24:16.159: INFO: Created: latency-svc-g5cfm
Dec 22 13:24:16.168: INFO: Got endpoints: latency-svc-fhdgv [1.638193638s]
Dec 22 13:24:16.273: INFO: Got endpoints: latency-svc-g5cfm [1.709854966s]
Dec 22 13:24:16.353: INFO: Created: latency-svc-jx6kl
Dec 22 13:24:16.672: INFO: Got endpoints: latency-svc-jx6kl [1.919330988s]
Dec 22 13:24:16.727: INFO: Created: latency-svc-gdhj9
Dec 22 13:24:16.748: INFO: Got endpoints: latency-svc-gdhj9 [1.954351841s]
Dec 22 13:24:16.964: INFO: Created: latency-svc-496c7
Dec 22 13:24:16.976: INFO: Got endpoints: latency-svc-496c7 [2.134816557s]
Dec 22 13:24:17.221: INFO: Created: latency-svc-zh9m4
Dec 22 13:24:17.287: INFO: Got endpoints: latency-svc-zh9m4 [2.325407799s]
Dec 22 13:24:17.292: INFO: Created: latency-svc-sq9mf
Dec 22 13:24:17.317: INFO: Got endpoints: latency-svc-sq9mf [2.310842376s]
Dec 22 13:24:17.438: INFO: Created: latency-svc-kx8zv
Dec 22 13:24:17.455: INFO: Got endpoints: latency-svc-kx8zv [2.315481249s]
Dec 22 13:24:17.715: INFO: Created: latency-svc-fvzbk
Dec 22 13:24:17.715: INFO: Got endpoints: latency-svc-fvzbk [2.380256502s]
Dec 22 13:24:17.796: INFO: Created: latency-svc-22fhr
Dec 22 13:24:17.906: INFO: Got endpoints: latency-svc-22fhr [2.512952899s]
Dec 22 13:24:17.936: INFO: Created: latency-svc-lwhjq
Dec 22 13:24:17.979: INFO: Got endpoints: latency-svc-lwhjq [2.402099211s]
Dec 22 13:24:18.214: INFO: Created: latency-svc-m4rlc
Dec 22 13:24:18.221: INFO: Got endpoints: latency-svc-m4rlc [2.608121883s]
Dec 22 13:24:18.417: INFO: Created: latency-svc-snq6d
Dec 22 13:24:18.428: INFO: Got endpoints: latency-svc-snq6d [2.762765977s]
Dec 22 13:24:18.508: INFO: Created: latency-svc-6hntf
Dec 22 13:24:18.630: INFO: Got endpoints: latency-svc-6hntf [2.821972773s]
Dec 22 13:24:18.710: INFO: Created: latency-svc-68hq9
Dec 22 13:24:18.832: INFO: Got endpoints: latency-svc-68hq9 [2.784576557s]
Dec 22 13:24:18.877: INFO: Created: latency-svc-xst6v
Dec 22 13:24:18.883: INFO: Got endpoints: latency-svc-xst6v [2.715302909s]
Dec 22 13:24:18.939: INFO: Created: latency-svc-hw8nr
Dec 22 13:24:19.141: INFO: Got endpoints: latency-svc-hw8nr [2.867250578s]
Dec 22 13:24:19.158: INFO: Created: latency-svc-qpjjz
Dec 22 13:24:19.166: INFO: Got endpoints: latency-svc-qpjjz [2.494057483s]
Dec 22 13:24:19.335: INFO: Created: latency-svc-778cd
Dec 22 13:24:19.340: INFO: Got endpoints: latency-svc-778cd [2.592127097s]
Dec 22 13:24:19.414: INFO: Created: latency-svc-7km2n
Dec 22 13:24:19.421: INFO: Got endpoints: latency-svc-7km2n [2.445489186s]
Dec 22 13:24:19.933: INFO: Created: latency-svc-z4plm
Dec 22 13:24:19.933: INFO: Got endpoints: latency-svc-z4plm [2.645822798s]
Dec 22 13:24:19.995: INFO: Created: latency-svc-tdsl4
Dec 22 13:24:20.079: INFO: Got endpoints: latency-svc-tdsl4 [2.761055301s]
Dec 22 13:24:20.113: INFO: Created: latency-svc-m54tr
Dec 22 13:24:20.143: INFO: Got endpoints: latency-svc-m54tr [2.687370693s]
Dec 22 13:24:20.183: INFO: Created: latency-svc-npt8n
Dec 22 13:24:20.374: INFO: Got endpoints: latency-svc-npt8n [2.65927329s]
Dec 22 13:24:20.392: INFO: Created: latency-svc-f4cns
Dec 22 13:24:20.459: INFO: Got endpoints: latency-svc-f4cns [2.552501531s]
Dec 22 13:24:20.459: INFO: Created: latency-svc-kdrst
Dec 22 13:24:20.622: INFO: Got endpoints: latency-svc-kdrst [2.642632139s]
Dec 22 13:24:20.665: INFO: Created: latency-svc-nrff4
Dec 22 13:24:20.673: INFO: Got endpoints: latency-svc-nrff4 [2.452003733s]
Dec 22 13:24:20.911: INFO: Created: latency-svc-zn4jj
Dec 22 13:24:20.929: INFO: Got endpoints: latency-svc-zn4jj [2.500917134s]
Dec 22 13:24:21.212: INFO: Created: latency-svc-q96nf
Dec 22 13:24:21.212: INFO: Got endpoints: latency-svc-q96nf [2.581950492s]
Dec 22 13:24:21.278: INFO: Created: latency-svc-gfttn
Dec 22 13:24:21.379: INFO: Got endpoints: latency-svc-gfttn [2.54650449s]
Dec 22 13:24:21.427: INFO: Created: latency-svc-c8lzz
Dec 22 13:24:21.444: INFO: Got endpoints: latency-svc-c8lzz [2.561217606s]
Dec 22 13:24:21.619: INFO: Created: latency-svc-v6w8s
Dec 22 13:24:21.651: INFO: Got endpoints: latency-svc-v6w8s [2.510219329s]
Dec 22 13:24:21.704: INFO: Created: latency-svc-hhlpm
Dec 22 13:24:21.710: INFO: Got endpoints: latency-svc-hhlpm [2.543633175s]
Dec 22 13:24:21.831: INFO: Created: latency-svc-hjt85
Dec 22 13:24:21.834: INFO: Got endpoints: latency-svc-hjt85 [2.49404818s]
Dec 22 13:24:21.997: INFO: Created: latency-svc-nt89j
Dec 22 13:24:22.007: INFO: Got endpoints: latency-svc-nt89j [2.585817771s]
Dec 22 13:24:22.094: INFO: Created: latency-svc-vv8jb
Dec 22 13:24:22.236: INFO: Got endpoints: latency-svc-vv8jb [2.302674335s]
Dec 22 13:24:22.293: INFO: Created: latency-svc-qvlh6
Dec 22 13:24:22.394: INFO: Got endpoints: latency-svc-qvlh6 [2.314667335s]
Dec 22 13:24:22.472: INFO: Created: latency-svc-bfqqc
Dec 22 13:24:22.579: INFO: Got endpoints: latency-svc-bfqqc [2.436094847s]
Dec 22 13:24:22.620: INFO: Created: latency-svc-mmdvv
Dec 22 13:24:22.626: INFO: Got endpoints: latency-svc-mmdvv [2.251685748s]
Dec 22 13:24:22.679: INFO: Created: latency-svc-cmcsg
Dec 22 13:24:22.740: INFO: Got endpoints: latency-svc-cmcsg [2.281011907s]
Dec 22 13:24:22.807: INFO: Created: latency-svc-fflc8
Dec 22 13:24:22.973: INFO: Got endpoints: latency-svc-fflc8 [2.350461767s]
Dec 22 13:24:22.990: INFO: Created: latency-svc-jlj55
Dec 22 13:24:23.005: INFO: Got endpoints: latency-svc-jlj55 [2.331942571s]
Dec 22 13:24:23.064: INFO: Created: latency-svc-v6lvb
Dec 22 13:24:23.146: INFO: Got endpoints: latency-svc-v6lvb [2.216928659s]
Dec 22 13:24:23.198: INFO: Created: latency-svc-9t8pj
Dec 22 13:24:23.226: INFO: Got endpoints: latency-svc-9t8pj [2.01342421s]
Dec 22 13:24:23.336: INFO: Created: latency-svc-qwht6
Dec 22 13:24:23.549: INFO: Got endpoints: latency-svc-qwht6 [2.170169331s]
Dec 22 13:24:23.571: INFO: Created: latency-svc-78s42
Dec 22 13:24:23.572: INFO: Got endpoints: latency-svc-78s42 [2.127006991s]
Dec 22 13:24:23.652: INFO: Created: latency-svc-6gb2z
Dec 22 13:24:23.788: INFO: Got endpoints: latency-svc-6gb2z [2.137063167s]
Dec 22 13:24:23.877: INFO: Created: latency-svc-wxm6r
Dec 22 13:24:23.993: INFO: Got endpoints: latency-svc-wxm6r [2.282195517s]
Dec 22 13:24:24.028: INFO: Created: latency-svc-x2fsr
Dec 22 13:24:24.064: INFO: Got endpoints: latency-svc-x2fsr [2.22961349s]
Dec 22 13:24:24.177: INFO: Created: latency-svc-7sbj9
Dec 22 13:24:24.188: INFO: Got endpoints: latency-svc-7sbj9 [2.180093744s]
Dec 22 13:24:24.232: INFO: Created: latency-svc-4wr6h
Dec 22 13:24:24.246: INFO: Got endpoints: latency-svc-4wr6h [2.009813474s]
Dec 22 13:24:24.412: INFO: Created: latency-svc-5gkcf
Dec 22 13:24:24.437: INFO: Got endpoints: latency-svc-5gkcf [2.043303011s]
Dec 22 13:24:24.446: INFO: Created: latency-svc-tk4sl
Dec 22 13:24:24.448: INFO: Got endpoints: latency-svc-tk4sl [1.8688933s]
Dec 22 13:24:24.625: INFO: Created: latency-svc-2m2gc
Dec 22 13:24:24.640: INFO: Got endpoints: latency-svc-2m2gc [2.01382568s]
Dec 22 13:24:24.842: INFO: Created: latency-svc-jb64f
Dec 22 13:24:24.970: INFO: Got endpoints: latency-svc-jb64f [2.22959463s]
Dec 22 13:24:24.978: INFO: Created: latency-svc-vz9xp
Dec 22 13:24:24.984: INFO: Got endpoints: latency-svc-vz9xp [2.010344731s]
Dec 22 13:24:25.055: INFO: Created: latency-svc-hlshg
Dec 22 13:24:25.160: INFO: Got endpoints: latency-svc-hlshg [2.154706563s]
Dec 22 13:24:25.216: INFO: Created: latency-svc-twnrx
Dec 22 13:24:25.232: INFO: Got endpoints: latency-svc-twnrx [2.085245336s]
Dec 22 13:24:25.350: INFO: Created: latency-svc-bprnl
Dec 22 13:24:25.412: INFO: Got endpoints: latency-svc-bprnl [2.185826146s]
Dec 22 13:24:25.417: INFO: Created: latency-svc-wxn99
Dec 22 13:24:25.514: INFO: Got endpoints: latency-svc-wxn99 [1.964307648s]
Dec 22 13:24:25.580: INFO: Created: latency-svc-hnlx6
Dec 22 13:24:25.587: INFO: Got endpoints: latency-svc-hnlx6 [2.015347487s]
Dec 22 13:24:25.747: INFO: Created: latency-svc-qjkpx
Dec 22 13:24:25.752: INFO: Got endpoints: latency-svc-qjkpx [1.963715326s]
Dec 22 13:24:25.888: INFO: Created: latency-svc-r6zcc
Dec 22 13:24:25.891: INFO: Got endpoints: latency-svc-r6zcc [1.89779752s]
Dec 22 13:24:25.943: INFO: Created: latency-svc-2p7xn
Dec 22 13:24:25.955: INFO: Got endpoints: latency-svc-2p7xn [1.890592873s]
Dec 22 13:24:26.130: INFO: Created: latency-svc-bvf8j
Dec 22 13:24:26.167: INFO: Got endpoints: latency-svc-bvf8j [1.97943909s]
Dec 22 13:24:26.293: INFO: Created: latency-svc-22qcm
Dec 22 13:24:26.301: INFO: Got endpoints: latency-svc-22qcm [2.054655255s]
Dec 22 13:24:26.331: INFO: Created: latency-svc-6chc9
Dec 22 13:24:26.339: INFO: Got endpoints: latency-svc-6chc9 [1.90115627s]
Dec 22 13:24:26.501: INFO: Created: latency-svc-4579v
Dec 22 13:24:26.504: INFO: Got endpoints: latency-svc-4579v [2.055796146s]
Dec 22 13:24:26.595: INFO: Created: latency-svc-c9ncv
Dec 22 13:24:26.699: INFO: Got endpoints: latency-svc-c9ncv [2.059213987s]
Dec 22 13:24:26.740: INFO: Created: latency-svc-pgjkx
Dec 22 13:24:26.768: INFO: Got endpoints: latency-svc-pgjkx [1.797955379s]
Dec 22 13:24:26.788: INFO: Created: latency-svc-5mts6
Dec 22 13:24:26.788: INFO: Got endpoints: latency-svc-5mts6 [1.803464225s]
Dec 22 13:24:26.903: INFO: Created: latency-svc-j9jkg
Dec 22 13:24:26.911: INFO: Got endpoints: latency-svc-j9jkg [1.751706135s]
Dec 22 13:24:26.972: INFO: Created: latency-svc-phn5m
Dec 22 13:24:26.989: INFO: Got endpoints: latency-svc-phn5m [1.757175442s]
Dec 22 13:24:27.065: INFO: Created: latency-svc-n5xtl
Dec 22 13:24:27.115: INFO: Created: latency-svc-8rjm2
Dec 22 13:24:27.115: INFO: Got endpoints: latency-svc-n5xtl [1.703602417s]
Dec 22 13:24:27.136: INFO: Got endpoints: latency-svc-8rjm2 [1.622439587s]
Dec 22 13:24:27.265: INFO: Created: latency-svc-gncgf
Dec 22 13:24:27.273: INFO: Got endpoints: latency-svc-gncgf [1.685775349s]
Dec 22 13:24:27.324: INFO: Created: latency-svc-42kxr
Dec 22 13:24:27.361: INFO: Got endpoints: latency-svc-42kxr [1.608359072s]
Dec 22 13:24:27.435: INFO: Created: latency-svc-lkdvn
Dec 22 13:24:27.446: INFO: Got endpoints: latency-svc-lkdvn [1.555641339s]
Dec 22 13:24:27.504: INFO: Created: latency-svc-cbdd8
Dec 22 13:24:27.517: INFO: Got endpoints: latency-svc-cbdd8 [1.561801586s]
Dec 22 13:24:27.626: INFO: Created: latency-svc-tlpb9
Dec 22 13:24:27.631: INFO: Got endpoints: latency-svc-tlpb9 [1.463315191s]
Dec 22 13:24:27.687: INFO: Created: latency-svc-gxq4d
Dec 22 13:24:27.754: INFO: Got endpoints: latency-svc-gxq4d [1.452906892s]
Dec 22 13:24:27.853: INFO: Created: latency-svc-fcdrs
Dec 22 13:24:27.947: INFO: Got endpoints: latency-svc-fcdrs [1.608009439s]
Dec 22 13:24:27.952: INFO: Created: latency-svc-vx6fn
Dec 22 13:24:27.958: INFO: Got endpoints: latency-svc-vx6fn [1.454193681s]
Dec 22 13:24:28.013: INFO: Created: latency-svc-lksp5
Dec 22 13:24:28.130: INFO: Got endpoints: latency-svc-lksp5 [1.430554177s]
Dec 22 13:24:28.148: INFO: Created: latency-svc-qcr68
Dec 22 13:24:28.184: INFO: Got endpoints: latency-svc-qcr68 [1.416012228s]
Dec 22 13:24:28.191: INFO: Created: latency-svc-nl26n
Dec 22 13:24:28.196: INFO: Got endpoints: latency-svc-nl26n [1.408091804s]
Dec 22 13:24:28.302: INFO: Created: latency-svc-d2rff
Dec 22 13:24:28.303: INFO: Got endpoints: latency-svc-d2rff [1.391504104s]
Dec 22 13:24:28.347: INFO: Created: latency-svc-twmd9
Dec 22 13:24:28.369: INFO: Got endpoints: latency-svc-twmd9 [1.379639887s]
Dec 22 13:24:28.583: INFO: Created: latency-svc-hcr8k
Dec 22 13:24:28.587: INFO: Got endpoints: latency-svc-hcr8k [1.471308201s]
Dec 22 13:24:28.687: INFO: Created: latency-svc-65ckd
Dec 22 13:24:28.696: INFO: Got endpoints: latency-svc-65ckd [1.559832068s]
Dec 22 13:24:28.745: INFO: Created: latency-svc-h7996
Dec 22 13:24:28.908: INFO: Got endpoints: latency-svc-h7996 [1.634913537s]
Dec 22 13:24:28.912: INFO: Created: latency-svc-kwhjk
Dec 22 13:24:28.940: INFO: Got endpoints: latency-svc-kwhjk [1.579474307s]
Dec 22 13:24:29.122: INFO: Created: latency-svc-757hs
Dec 22 13:24:29.147: INFO: Got endpoints: latency-svc-757hs [1.700213189s]
Dec 22 13:24:29.147: INFO: Latencies: [180.45112ms 197.87474ms 232.847084ms 367.232495ms 426.39917ms 565.746448ms 783.949542ms 953.546087ms 1.034387633s 1.217527412s 1.289727861s 1.379639887s 1.391504104s 1.408091804s 1.416012228s 1.42296604s 1.426089626s 1.430554177s 1.436547692s 1.452906892s 1.454193681s 1.462118445s 1.463315191s 1.466011948s 1.471308201s 1.475557524s 1.478050178s 1.489203392s 1.510605152s 1.519989027s 1.534134171s 1.535266761s 1.539741881s 1.546372598s 1.555641339s 1.559832068s 1.561801586s 1.564823575s 1.568794784s 1.571290876s 1.572315558s 1.579474307s 1.588029367s 1.589566225s 1.608009439s 1.608359072s 1.609188043s 1.615068024s 1.615746571s 1.619035014s 1.619484088s 1.622439587s 1.634879537s 1.634913537s 1.636699111s 1.638193638s 1.639964083s 1.646289793s 1.675666137s 1.682535248s 1.685775349s 1.687886837s 1.700213189s 1.703360361s 1.703602417s 1.705322737s 1.709854966s 1.732426525s 1.735401457s 1.741052751s 1.746095282s 1.747595499s 1.751706135s 1.754653284s 1.757175442s 1.759309535s 1.76250716s 1.772229973s 1.780544578s 1.78626041s 1.787458295s 1.792480258s 1.792951409s 1.796873985s 1.797955379s 1.799736242s 1.801803738s 1.803464225s 1.804549533s 1.805511899s 1.810231293s 1.814846908s 1.817084995s 1.819548303s 1.844010955s 1.84837421s 1.850756912s 1.868158682s 1.8688933s 1.870437152s 1.878762036s 1.880674688s 1.887616369s 1.890592873s 1.89710653s 1.89779752s 1.90115627s 1.903833448s 1.90961781s 1.919330988s 1.943260641s 1.946713443s 1.952612979s 1.954351841s 1.963715326s 1.96387866s 1.964307648s 1.969314349s 1.97943909s 1.990468475s 2.009813474s 2.010344731s 2.01342421s 2.01382568s 2.015347487s 2.015928196s 2.018108332s 2.043303011s 2.054655255s 2.055796146s 2.059213987s 2.074560345s 2.085245336s 2.089607978s 2.10067266s 2.127006991s 2.134816557s 2.136810934s 2.137063167s 2.139947014s 2.147476175s 2.148113992s 2.154706563s 2.15862688s 2.170169331s 2.177088343s 2.177863286s 2.180093744s 2.180291324s 2.185826146s 2.20258625s 2.202945314s 2.211718462s 2.216928659s 2.229262438s 2.22959463s 2.22961349s 2.237437141s 2.241103367s 2.248450172s 2.251685748s 2.278913481s 2.281011907s 2.282195517s 2.302674335s 2.310842376s 2.314667335s 2.315481249s 2.325407799s 2.331942571s 2.350461767s 2.380256502s 2.402099211s 2.425530257s 2.436094847s 2.445489186s 2.452003733s 2.49404818s 2.494057483s 2.500917134s 2.510219329s 2.512952899s 2.543633175s 2.54650449s 2.552501531s 2.561217606s 2.581950492s 2.585817771s 2.592127097s 2.608121883s 2.642632139s 2.645822798s 2.65927329s 2.687370693s 2.715302909s 2.761055301s 2.762765977s 2.784576557s 2.821972773s 2.867250578s]
Dec 22 13:24:29.147: INFO: 50 %ile: 1.878762036s
Dec 22 13:24:29.147: INFO: 90 %ile: 2.510219329s
Dec 22 13:24:29.147: INFO: 99 %ile: 2.821972773s
Dec 22 13:24:29.147: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:24:29.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5320" for this suite.
Dec 22 13:25:15.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:25:15.491: INFO: namespace svc-latency-5320 deletion completed in 46.308240588s
• [SLOW TEST:80.779 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:25:15.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 22 13:25:26.201: INFO: Successfully updated pod "labelsupdate7feb7808-7b08-4b7f-bdd3-540042d34c7a"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:25:28.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8484" for this suite.
Dec 22 13:25:50.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:25:50.444: INFO: namespace projected-8484 deletion completed in 22.138001879s
• [SLOW TEST:34.953 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:25:50.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9e66b798-945e-4d2d-ab92-2172cd0e6447
STEP: Creating configMap with name cm-test-opt-upd-180f79ae-ffcb-4594-85a1-35e3f1cd5e72
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9e66b798-945e-4d2d-ab92-2172cd0e6447
STEP: Updating configmap cm-test-opt-upd-180f79ae-ffcb-4594-85a1-35e3f1cd5e72
STEP: Creating configMap with name cm-test-opt-create-540cc362-17b3-4697-963b-7addccdd46a0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:07.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7403" for this suite.
Dec 22 13:26:29.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:26:29.253: INFO: namespace projected-7403 deletion completed in 22.214295308s
• [SLOW TEST:38.808 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:26:29.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f089e182-dd18-40fb-b9c8-75970e0a0e49
STEP: Creating a pod to test consume secrets
Dec 22 13:26:29.410: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94" in namespace "projected-8396" to be "success or failure"
Dec 22 13:26:29.426: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 16.146359ms
Dec 22 13:26:31.437: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027706289s
Dec 22 13:26:33.443: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033433066s
Dec 22 13:26:35.448: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038828982s
Dec 22 13:26:37.455: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045440932s
Dec 22 13:26:39.463: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053449872s
STEP: Saw pod success
Dec 22 13:26:39.463: INFO: Pod "pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94" satisfied condition "success or failure"
Dec 22 13:26:39.467: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 container projected-secret-volume-test:
STEP: delete the pod
Dec 22 13:26:39.933: INFO: Waiting for pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 to disappear
Dec 22 13:26:39.942: INFO: Pod pod-projected-secrets-b5c690dc-489c-44c1-8697-6f65ec095e94 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:39.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8396" for this suite.
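The projected-secret test above exercises a projected volume whose secret source remaps a key to a new file path. A rough sketch of the kind of pod manifest involved looks like this (all names here are illustrative placeholders, not taken from this run):

```yaml
# Sketch of a projected-secret volume with a key-to-path mapping.
# Names (mysecret, data-1, new-path-data-1) are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    # Print the remapped file; the test asserts on its content and mode.
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: data-1
            path: new-path-data-1
```

The `items` list is what distinguishes the "with mappings" variant: without it, each secret key would appear under its own name at the mount root.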
Dec 22 13:26:45.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:26:46.064: INFO: namespace projected-8396 deletion completed in 6.114177546s
• [SLOW TEST:16.811 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:26:46.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:26:54.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3704" for this suite.
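The read-only-root-filesystem check above hinges on a single securityContext field. A minimal sketch of the kind of spec exercised (illustrative names, not the test's exact pod):

```yaml
# Sketch: with readOnlyRootFilesystem set, writes to / fail inside
# the container; only mounted volumes remain writable.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # The write to /file is expected to fail with a read-only fs error.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```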
Dec 22 13:27:56.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:27:56.488: INFO: namespace kubelet-test-3704 deletion completed in 1m2.237271259s
• [SLOW TEST:70.424 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:27:56.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0b083024-24ff-4a61-8d73-17cbb42f67ad
STEP: Creating a pod to test consume configMaps
Dec 22 13:27:56.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e" in namespace "configmap-2712" to be "success or failure"
Dec 22 13:27:56.687: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.716323ms
Dec 22 13:27:58.702: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02267891s
Dec 22 13:28:00.719: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039923463s
Dec 22 13:28:02.726: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046271758s
Dec 22 13:28:04.735: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056070653s
Dec 22 13:28:06.744: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064563161s
STEP: Saw pod success
Dec 22 13:28:06.744: INFO: Pod "pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e" satisfied condition "success or failure"
Dec 22 13:28:06.757: INFO: Trying to get logs from node iruya-node pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e container configmap-volume-test:
STEP: delete the pod
Dec 22 13:28:06.834: INFO: Waiting for pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e to disappear
Dec 22 13:28:06.845: INFO: Pod pod-configmaps-04879545-e2cd-4689-b179-af27ca68a20e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:06.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2712" for this suite.
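The configMap-volume consumption pattern above corresponds roughly to the following manifest pair (a sketch with placeholder names, not the exact spec from this run):

```yaml
# Sketch: mount a ConfigMap as a volume and print one of its keys.
# Names (configmap-test-volume, data-1) are illustrative placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    # Each ConfigMap key becomes a file under the mount path.
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```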
Dec 22 13:28:13.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:28:13.600: INFO: namespace configmap-2712 deletion completed in 6.747139821s
• [SLOW TEST:17.111 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:28:13.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e1925c1a-e987-4603-93b2-361cc61e1d17
STEP: Creating a pod to test consume secrets
Dec 22 13:28:13.740: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70" in namespace "secrets-596" to be "success or failure"
Dec 22 13:28:13.745: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 5.737497ms
Dec 22 13:28:15.752: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012805267s
Dec 22 13:28:17.764: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024737183s
Dec 22 13:28:19.771: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031114131s
Dec 22 13:28:21.786: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045915336s
Dec 22 13:28:23.818: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07824658s
STEP: Saw pod success
Dec 22 13:28:23.818: INFO: Pod "pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70" satisfied condition "success or failure"
Dec 22 13:28:23.834: INFO: Trying to get logs from node iruya-node pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 container secret-volume-test:
STEP: delete the pod
Dec 22 13:28:24.247: INFO: Waiting for pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 to disappear
Dec 22 13:28:24.266: INFO: Pod pod-secrets-bc6f3823-53bb-44b1-b7db-192cd5ebca70 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:24.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-596" for this suite.
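The "mappings and Item Mode set" variant above adds a per-item file mode on top of the key-to-path remapping. A sketch of the relevant volume spec (names are illustrative, not from this run):

```yaml
# Sketch: a Secret volume with a key remapped to a new path and an
# explicit per-item file mode (0400). Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # List the mount to see the mode, then print the remapped file.
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
```

`mode` applies to the single item; `defaultMode` on the `secret:` block would instead set the mode for all projected keys.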
Dec 22 13:28:30.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:28:30.565: INFO: namespace secrets-596 deletion completed in 6.280307874s
• [SLOW TEST:16.965 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:28:30.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:28:30.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1364'
Dec 22 13:28:32.791: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 13:28:32.791: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 22 13:28:32.819: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 22 13:28:32.820: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 22 13:28:32.869: INFO: scanned /root for discovery docs:
Dec 22 13:28:32.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1364'
Dec 22 13:28:55.251: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 22 13:28:55.251: INFO: stdout: "Created e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce\nScaling up e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Dec 22 13:28:55.251: INFO: stdout: "Created e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce\nScaling up e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 22 13:28:55.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1364'
Dec 22 13:28:55.414: INFO: stderr: ""
Dec 22 13:28:55.414: INFO: stdout: "e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 "
Dec 22 13:28:55.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1364'
Dec 22 13:28:55.521: INFO: stderr: ""
Dec 22 13:28:55.522: INFO: stdout: "true"
Dec 22 13:28:55.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1364'
Dec 22 13:28:55.621: INFO: stderr: ""
Dec 22 13:28:55.621: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 22 13:28:55.621: INFO: e2e-test-nginx-rc-2b04ff2884301b0c9134bc64bb8a32ce-gdsx5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 22 13:28:55.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1364'
Dec 22 13:28:55.755: INFO: stderr: ""
Dec 22 13:28:55.755: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:28:55.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1364" for this suite.
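The log itself flags both `kubectl run --generator=run/v1` and `kubectl rolling-update` as deprecated. For reference, the ReplicationController that the `run/v1` generator creates corresponds roughly to a manifest like the following (a sketch; field values inferred from the command line above, not dumped from the cluster):

```yaml
# Sketch of the RC behind `kubectl run e2e-test-nginx-rc
# --image=docker.io/library/nginx:1.14-alpine --generator=run/v1`.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

`rolling-update` works by creating a second RC with the same template, scaling it up while scaling the original down, then renaming it — which is exactly the sequence visible in the stdout above.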
Dec 22 13:29:01.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:01.899: INFO: namespace kubectl-1364 deletion completed in 6.133640357s
• [SLOW TEST:31.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:01.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 22 13:29:12.613: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 22 13:29:13.084: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 22 13:29:13.517: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-74c7f779-870a-42c6-8ebf-e99cc6c0328d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:14.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-283" for this suite.
Dec 22 13:29:20.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:20.287: INFO: namespace svcaccounts-283 deletion completed in 6.220934125s
• [SLOW TEST:18.388 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:20.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 22 13:29:20.424: INFO: Waiting up to 5m0s for pod "pod-33eda857-620b-424b-9c71-7bceac705e1b" in namespace "emptydir-8225" to be "success or failure"
Dec 22 13:29:20.505: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 80.80909ms
Dec 22 13:29:22.516: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092080326s
Dec 22 13:29:24.531: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107164485s
Dec 22 13:29:26.549: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124472838s
Dec 22 13:29:28.564: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139956287s
Dec 22 13:29:30.577: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153268728s
Dec 22 13:29:32.624: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.200222897s
Dec 22 13:29:34.640: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.215928948s
STEP: Saw pod success
Dec 22 13:29:34.640: INFO: Pod "pod-33eda857-620b-424b-9c71-7bceac705e1b" satisfied condition "success or failure"
Dec 22 13:29:34.648: INFO: Trying to get logs from node iruya-node pod pod-33eda857-620b-424b-9c71-7bceac705e1b container test-container:
STEP: delete the pod
Dec 22 13:29:34.825: INFO: Waiting for pod pod-33eda857-620b-424b-9c71-7bceac705e1b to disappear
Dec 22 13:29:34.881: INFO: Pod pod-33eda857-620b-424b-9c71-7bceac705e1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:34.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8225" for this suite.
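The "(non-root,0777,tmpfs)" case above combines a memory-backed emptyDir with a non-root user. A minimal sketch of the pieces involved (illustrative names; the conformance test additionally verifies the 0777 mode via its own mount-test image):

```yaml
# Sketch: a tmpfs-backed emptyDir (medium: Memory) accessed as a
# non-root user. Names and the UID 1001 are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Show the mount's permissions, then write to it as the non-root user.
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/file"]
    securityContext:
      runAsUser: 1001
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```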
Dec 22 13:29:40.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:41.060: INFO: namespace emptydir-8225 deletion completed in 6.161737534s
• [SLOW TEST:20.772 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:41.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5743/secret-test-fd2dbb22-dd43-4546-b3b2-ae4118054d61
STEP: Creating a pod to test consume secrets
Dec 22 13:29:41.184: INFO: Waiting up to 5m0s for pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab" in namespace "secrets-5743" to be "success or failure"
Dec 22 13:29:41.193: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.247909ms
Dec 22 13:29:43.200: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016420651s
Dec 22 13:29:45.243: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058909328s
Dec 22 13:29:47.253: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069372195s
Dec 22 13:29:49.263: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078888968s
Dec 22 13:29:51.273: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089232913s
STEP: Saw pod success
Dec 22 13:29:51.273: INFO: Pod "pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab" satisfied condition "success or failure"
Dec 22 13:29:51.279: INFO: Trying to get logs from node iruya-node pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab container env-test:
STEP: delete the pod
Dec 22 13:29:51.376: INFO: Waiting for pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab to disappear
Dec 22 13:29:51.382: INFO: Pod pod-configmaps-36e5220d-2836-4aa6-9c05-90e73bd2f4ab no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:29:51.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5743" for this suite.
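Unlike the volume-based secret tests earlier in this run, the "consumable via the environment" case injects a secret key as an environment variable. A sketch of that pattern (names are illustrative placeholders):

```yaml
# Sketch: consume a Secret key as an environment variable via
# valueFrom.secretKeyRef. Names (secret-test, data-1, SECRET_DATA)
# are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    # The test asserts the variable's value appears in the container output.
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```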
Dec 22 13:29:57.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:29:57.573: INFO: namespace secrets-5743 deletion completed in 6.183122857s
• [SLOW TEST:16.513 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:29:57.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 22 13:29:57.637: INFO: PodSpec: initContainers in spec.initContainers
Dec 22 13:30:58.873: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e2d3ba29-e5eb-497c-bb5e-91adafbc8fb1", GenerateName:"", Namespace:"init-container-381", SelfLink:"/api/v1/namespaces/init-container-381/pods/pod-init-e2d3ba29-e5eb-497c-bb5e-91adafbc8fb1",
UID:"0022552a-0c59-402e-b2f1-1246af32311a", ResourceVersion:"17641240", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"637705430"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n7ghm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0000b40c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n7ghm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c760e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00206c060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001c761f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c76250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001c76258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001c7625c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712618197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0024aa080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000df20e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000df21c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4f45cba8adb7ca3d43f6bca355cd230fa67d75d949898d0a54d709afafda96c7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024aa0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024aa0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:30:58.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-381" for this suite.
Dec 22 13:31:21.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:31:21.159: INFO: namespace init-container-381 deletion completed in 22.169751755s

• [SLOW TEST:83.586 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:31:21.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 22 13:31:21.306: INFO: Waiting up to 5m0s for pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424" in namespace "emptydir-2444" to be "success or failure"
Dec 22 13:31:21.316: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 9.607183ms
Dec 22 13:31:23.322: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015694903s
Dec 22 13:31:25.326: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019562353s
Dec 22 13:31:27.336: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029137787s
Dec 22 13:31:29.342: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035788636s
Dec 22 13:31:31.918: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611550918s
STEP: Saw pod success
Dec 22 13:31:31.918: INFO: Pod "pod-a9298732-da6f-4fbb-be5b-5aeadb00e424" satisfied condition "success or failure"
Dec 22 13:31:31.931: INFO: Trying to get logs from node iruya-node pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 container test-container:
STEP: delete the pod
Dec 22 13:31:32.063: INFO: Waiting for pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 to disappear
Dec 22 13:31:32.080: INFO: Pod pod-a9298732-da6f-4fbb-be5b-5aeadb00e424 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:31:32.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2444" for this suite.
Dec 22 13:31:38.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:31:38.175: INFO: namespace emptydir-2444 deletion completed in 6.088173035s

• [SLOW TEST:17.015 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:31:38.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 in namespace container-probe-4175
Dec 22 13:31:46.346: INFO: Started pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 in namespace container-probe-4175
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 13:31:46.350: INFO: Initial restart count of pod liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 is 0
Dec 22 13:32:12.833: INFO: Restart count of pod container-probe-4175/liveness-77f7dea3-0fbb-41fc-b7cf-08c0a9956380 is now 1 (26.482763147s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 22 13:32:12.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4175" for this suite.
Dec 22 13:32:18.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:32:19.058: INFO: namespace container-probe-4175 deletion completed in 6.182223221s

• [SLOW TEST:40.883 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 22 13:32:19.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 22 13:32:19.212: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 22.470617ms)
Dec 22 13:32:19.221: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/:alternatives.log alternatives.l... (200; 8.647646ms)
Dec 22 13:32:19.232: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/:alternatives.log alternatives.l... (200; 10.985047ms)
Dec 22 13:32:19.241: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: