I0622 12:55:54.793098 7 e2e.go:243] Starting e2e run "f3ca41a4-0a95-4d7d-8964-dd6f46d82336" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1592830553 - Will randomize all specs
Will run 215 of 4412 specs

Jun 22 12:55:54.981: INFO: >>> kubeConfig: /root/.kube/config
Jun 22 12:55:54.985: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 22 12:55:55.002: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 22 12:55:55.041: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 22 12:55:55.041: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 22 12:55:55.041: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 22 12:55:55.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 22 12:55:55.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 22 12:55:55.056: INFO: e2e test version: v1.15.11
Jun 22 12:55:55.057: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:55:55.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jun 22 12:55:55.117: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-28083a12-c43e-40f8-be02-d55681a393d2
STEP: Creating a pod to test consume secrets
Jun 22 12:55:55.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f" in namespace "projected-7653" to be "success or failure"
Jun 22 12:55:55.143: INFO: Pod "pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.951105ms
Jun 22 12:55:57.147: INFO: Pod "pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02033373s
Jun 22 12:55:59.152: INFO: Pod "pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024673521s
STEP: Saw pod success
Jun 22 12:55:59.152: INFO: Pod "pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f" satisfied condition "success or failure"
Jun 22 12:55:59.154: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f container projected-secret-volume-test:
STEP: delete the pod
Jun 22 12:55:59.194: INFO: Waiting for pod pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f to disappear
Jun 22 12:55:59.210: INFO: Pod pod-projected-secrets-01a6e3e2-22d4-4673-9de8-2b2a805ac73f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:55:59.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7653" for this suite.
Jun 22 12:56:05.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:56:05.296: INFO: namespace projected-7653 deletion completed in 6.082798154s

• [SLOW TEST:10.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:56:05.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5e3c2577-64cd-455d-be2a-6a38df1d7857
STEP: Creating a pod to test consume secrets
Jun 22 12:56:05.429: INFO: Waiting up to 5m0s for pod "pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16" in namespace "secrets-2518" to be "success or failure"
Jun 22 12:56:05.458: INFO: Pod "pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16": Phase="Pending", Reason="", readiness=false. Elapsed: 28.853972ms
Jun 22 12:56:07.544: INFO: Pod "pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114406061s
Jun 22 12:56:09.548: INFO: Pod "pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118894987s
STEP: Saw pod success
Jun 22 12:56:09.548: INFO: Pod "pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16" satisfied condition "success or failure"
Jun 22 12:56:09.552: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16 container secret-volume-test:
STEP: delete the pod
Jun 22 12:56:09.621: INFO: Waiting for pod pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16 to disappear
Jun 22 12:56:09.630: INFO: Pod pod-secrets-718ca758-b116-4900-bba7-9bacdf208b16 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:56:09.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2518" for this suite.
Jun 22 12:56:15.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:56:15.722: INFO: namespace secrets-2518 deletion completed in 6.089728799s

• [SLOW TEST:10.425 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:56:15.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3c3c945b-298e-41d2-b9b8-2ca055c05771
STEP: Creating a pod to test consume configMaps
Jun 22 12:56:15.790: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287" in namespace "configmap-9071" to be "success or failure"
Jun 22 12:56:15.807: INFO: Pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287": Phase="Pending", Reason="", readiness=false. Elapsed: 16.827463ms
Jun 22 12:56:17.812: INFO: Pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021828908s
Jun 22 12:56:19.868: INFO: Pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077159889s
Jun 22 12:56:21.871: INFO: Pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080690932s
STEP: Saw pod success
Jun 22 12:56:21.871: INFO: Pod "pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287" satisfied condition "success or failure"
Jun 22 12:56:21.874: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287 container configmap-volume-test:
STEP: delete the pod
Jun 22 12:56:21.907: INFO: Waiting for pod pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287 to disappear
Jun 22 12:56:21.939: INFO: Pod pod-configmaps-d8f5ed7b-f156-4b4b-b932-4e5ae8709287 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:56:21.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9071" for this suite.
Jun 22 12:56:27.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:56:28.061: INFO: namespace configmap-9071 deletion completed in 6.118647502s

• [SLOW TEST:12.339 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:56:28.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e1d86ba8-4025-41e7-9354-758309f614c6
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:56:34.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5252" for this suite.
Jun 22 12:56:56.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:56:56.406: INFO: namespace configmap-5252 deletion completed in 22.099579213s

• [SLOW TEST:28.345 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:56:56.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ec619f12-145f-46c6-9d9c-3b06e9184ee8
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:56:56.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6164" for this suite.
Jun 22 12:57:02.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:57:02.570: INFO: namespace secrets-6164 deletion completed in 6.112052378s

• [SLOW TEST:6.164 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:57:02.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 22 12:57:02.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d" in namespace "downward-api-8148" to be "success or failure"
Jun 22 12:57:02.634: INFO: Pod "downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19063ms
Jun 22 12:57:04.639: INFO: Pod "downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009125877s
Jun 22 12:57:06.646: INFO: Pod "downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017028866s
STEP: Saw pod success
Jun 22 12:57:06.647: INFO: Pod "downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d" satisfied condition "success or failure"
Jun 22 12:57:06.650: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d container client-container:
STEP: delete the pod
Jun 22 12:57:06.671: INFO: Waiting for pod downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d to disappear
Jun 22 12:57:06.675: INFO: Pod downwardapi-volume-8db837c4-bd12-4655-849d-fc706b467d0d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:57:06.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8148" for this suite.
Jun 22 12:57:12.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:57:12.773: INFO: namespace downward-api-8148 deletion completed in 6.094991508s

• [SLOW TEST:10.203 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:57:12.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-d07b1768-4a42-4075-9e5b-617193e9a182
STEP: Creating configMap with name cm-test-opt-upd-8d195199-eac0-4fea-a0bf-14de165ede90
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d07b1768-4a42-4075-9e5b-617193e9a182
STEP: Updating configmap cm-test-opt-upd-8d195199-eac0-4fea-a0bf-14de165ede90
STEP: Creating configMap with name cm-test-opt-create-c095245c-69e2-4121-9aaa-1e28c9e9dac4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:57:20.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5752" for this suite.
Jun 22 12:57:43.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:57:43.150: INFO: namespace configmap-5752 deletion completed in 22.166962447s

• [SLOW TEST:30.376 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:57:43.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jun 22 12:57:43.191: INFO: Waiting up to 5m0s for pod "downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766" in namespace "downward-api-206" to be "success or failure"
Jun 22 12:57:43.211: INFO: Pod "downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766": Phase="Pending", Reason="", readiness=false. Elapsed: 19.738735ms
Jun 22 12:57:45.215: INFO: Pod "downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023819411s
Jun 22 12:57:47.219: INFO: Pod "downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027783143s
STEP: Saw pod success
Jun 22 12:57:47.219: INFO: Pod "downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766" satisfied condition "success or failure"
Jun 22 12:57:47.221: INFO: Trying to get logs from node iruya-worker pod downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766 container dapi-container:
STEP: delete the pod
Jun 22 12:57:47.362: INFO: Waiting for pod downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766 to disappear
Jun 22 12:57:47.419: INFO: Pod downward-api-9da61728-f6f7-47e7-b7ad-c692a4fbf766 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 12:57:47.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-206" for this suite.
Jun 22 12:57:53.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 12:57:53.531: INFO: namespace downward-api-206 deletion completed in 6.10914387s

• [SLOW TEST:10.381 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 12:57:53.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 22 12:57:54.280: INFO: Pod name wrapped-volume-race-57aa5835-c869-4b91-b02a-de27a69a0271: Found 0 pods out of 5
Jun 22 12:57:59.966: INFO: Pod name wrapped-volume-race-57aa5835-c869-4b91-b02a-de27a69a0271: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-57aa5835-c869-4b91-b02a-de27a69a0271 in namespace emptydir-wrapper-1098, will wait for the garbage collector to delete the pods
Jun 22 12:58:14.063: INFO: Deleting ReplicationController wrapped-volume-race-57aa5835-c869-4b91-b02a-de27a69a0271 took: 14.788868ms
Jun 22 12:58:14.363: INFO: Terminating ReplicationController wrapped-volume-race-57aa5835-c869-4b91-b02a-de27a69a0271 pods took: 300.261153ms
STEP: Creating RC which spawns configmap-volume pods
Jun 22 12:58:52.412: INFO: Pod name wrapped-volume-race-f4120cb7-5056-4d58-b339-452d04e590d4: Found 0 pods out of 5
Jun 22 12:58:57.418: INFO: Pod name wrapped-volume-race-f4120cb7-5056-4d58-b339-452d04e590d4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f4120cb7-5056-4d58-b339-452d04e590d4 in namespace emptydir-wrapper-1098, will wait for the garbage collector to delete the pods
Jun 22 12:59:11.516: INFO: Deleting ReplicationController wrapped-volume-race-f4120cb7-5056-4d58-b339-452d04e590d4 took: 8.205564ms
Jun 22 12:59:11.816: INFO: Terminating ReplicationController wrapped-volume-race-f4120cb7-5056-4d58-b339-452d04e590d4 pods took: 300.346408ms
STEP: Creating RC which spawns configmap-volume pods
Jun 22 12:59:52.364: INFO: Pod name wrapped-volume-race-fe783236-e668-4f40-bd28-8a40ce8d682a: Found 0 pods out of 5
Jun 22 12:59:57.372: INFO: Pod name wrapped-volume-race-fe783236-e668-4f40-bd28-8a40ce8d682a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fe783236-e668-4f40-bd28-8a40ce8d682a in namespace emptydir-wrapper-1098, will wait for the garbage collector to delete the pods
Jun 22 13:00:11.455: INFO: Deleting ReplicationController wrapped-volume-race-fe783236-e668-4f40-bd28-8a40ce8d682a took: 7.84607ms
Jun 22 13:00:11.756: INFO: Terminating ReplicationController wrapped-volume-race-fe783236-e668-4f40-bd28-8a40ce8d682a pods took: 300.243426ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:00:52.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1098" for this suite.
Jun 22 13:01:00.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:01:01.053: INFO: namespace emptydir-wrapper-1098 deletion completed in 8.105569953s

• [SLOW TEST:187.522 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:01:01.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0622 13:01:02.180148 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 22 13:01:02.180: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:01:02.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8153" for this suite.
Jun 22 13:01:08.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:01:08.411: INFO: namespace gc-8153 deletion completed in 6.229433337s

• [SLOW TEST:7.358 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:01:08.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 22 13:01:08.446: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jun 22 13:01:10.532: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:01:12.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1876" for this suite.
Jun 22 13:01:18.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:01:18.554: INFO: namespace replication-controller-1876 deletion completed in 6.395422093s

• [SLOW TEST:10.142 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:01:18.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4f5bfea5-efd1-46ca-ab4b-1eb9f55fd2a0
STEP: Creating a pod to test consume configMaps
Jun 22 13:01:18.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7" in namespace "projected-8079" to be "success or failure"
Jun 22 13:01:18.677: INFO: Pod "pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874448ms
Jun 22 13:01:20.702: INFO: Pod "pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029490522s
Jun 22 13:01:22.855: INFO: Pod "pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18243746s
STEP: Saw pod success
Jun 22 13:01:22.855: INFO: Pod "pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7" satisfied condition "success or failure"
Jun 22 13:01:22.858: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7 container projected-configmap-volume-test:
STEP: delete the pod
Jun 22 13:01:22.882: INFO: Waiting for pod pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7 to disappear
Jun 22 13:01:22.905: INFO: Pod pod-projected-configmaps-6e0a2776-a663-4a0a-833e-c698b1725ad7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:01:22.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8079" for this suite.
Jun 22 13:01:28.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:01:29.041: INFO: namespace projected-8079 deletion completed in 6.132761862s • [SLOW TEST:10.487 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:01:29.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 22 13:01:29.125: INFO: Waiting up to 5m0s for pod "client-containers-ef574c42-8f30-41db-bb13-225c3f935392" in namespace "containers-9981" to be "success or failure" Jun 22 13:01:29.133: INFO: Pod "client-containers-ef574c42-8f30-41db-bb13-225c3f935392": Phase="Pending", Reason="", readiness=false. Elapsed: 7.632536ms Jun 22 13:01:31.138: INFO: Pod "client-containers-ef574c42-8f30-41db-bb13-225c3f935392": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012251442s Jun 22 13:01:33.178: INFO: Pod "client-containers-ef574c42-8f30-41db-bb13-225c3f935392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052284628s STEP: Saw pod success Jun 22 13:01:33.178: INFO: Pod "client-containers-ef574c42-8f30-41db-bb13-225c3f935392" satisfied condition "success or failure" Jun 22 13:01:33.180: INFO: Trying to get logs from node iruya-worker pod client-containers-ef574c42-8f30-41db-bb13-225c3f935392 container test-container: STEP: delete the pod Jun 22 13:01:33.300: INFO: Waiting for pod client-containers-ef574c42-8f30-41db-bb13-225c3f935392 to disappear Jun 22 13:01:33.307: INFO: Pod client-containers-ef574c42-8f30-41db-bb13-225c3f935392 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:01:33.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9981" for this suite. Jun 22 13:01:39.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:01:39.399: INFO: namespace containers-9981 deletion completed in 6.087992983s • [SLOW TEST:10.357 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating 
a kubernetes client Jun 22 13:01:39.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jun 22 13:01:39.562: INFO: Waiting up to 5m0s for pod "var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3" in namespace "var-expansion-6992" to be "success or failure" Jun 22 13:01:39.651: INFO: Pod "var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3": Phase="Pending", Reason="", readiness=false. Elapsed: 89.699929ms Jun 22 13:01:41.655: INFO: Pod "var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093852598s Jun 22 13:01:43.659: INFO: Pod "var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097743324s STEP: Saw pod success Jun 22 13:01:43.659: INFO: Pod "var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3" satisfied condition "success or failure" Jun 22 13:01:43.662: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3 container dapi-container: STEP: delete the pod Jun 22 13:01:43.687: INFO: Waiting for pod var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3 to disappear Jun 22 13:01:43.801: INFO: Pod var-expansion-838c3b95-fa9c-485b-ba21-28ed7f51cbf3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:01:43.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6992" for this suite. 
Jun 22 13:01:50.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:01:50.094: INFO: namespace var-expansion-6992 deletion completed in 6.288130488s • [SLOW TEST:10.695 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:01:50.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 13:01:50.213: INFO: Waiting up to 5m0s for pod "pod-d71bf93c-61af-4226-8c00-305be83d759d" in namespace "emptydir-4505" to be "success or failure" Jun 22 13:01:50.230: INFO: Pod "pod-d71bf93c-61af-4226-8c00-305be83d759d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.741045ms Jun 22 13:01:52.234: INFO: Pod "pod-d71bf93c-61af-4226-8c00-305be83d759d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020871645s Jun 22 13:01:54.239: INFO: Pod "pod-d71bf93c-61af-4226-8c00-305be83d759d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025506081s STEP: Saw pod success Jun 22 13:01:54.239: INFO: Pod "pod-d71bf93c-61af-4226-8c00-305be83d759d" satisfied condition "success or failure" Jun 22 13:01:54.242: INFO: Trying to get logs from node iruya-worker2 pod pod-d71bf93c-61af-4226-8c00-305be83d759d container test-container: STEP: delete the pod Jun 22 13:01:54.260: INFO: Waiting for pod pod-d71bf93c-61af-4226-8c00-305be83d759d to disappear Jun 22 13:01:54.265: INFO: Pod pod-d71bf93c-61af-4226-8c00-305be83d759d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:01:54.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4505" for this suite. Jun 22 13:02:00.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:02:00.388: INFO: namespace emptydir-4505 deletion completed in 6.118331663s • [SLOW TEST:10.293 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:02:00.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3d87e35d-a9a0-4fc5-82f5-01d5053cffab STEP: Creating a pod to test consume secrets Jun 22 13:02:00.549: INFO: Waiting up to 5m0s for pod "pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23" in namespace "secrets-6014" to be "success or failure" Jun 22 13:02:00.597: INFO: Pod "pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23": Phase="Pending", Reason="", readiness=false. Elapsed: 47.722372ms Jun 22 13:02:02.601: INFO: Pod "pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051973262s Jun 22 13:02:04.606: INFO: Pod "pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056330271s STEP: Saw pod success Jun 22 13:02:04.606: INFO: Pod "pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23" satisfied condition "success or failure" Jun 22 13:02:04.609: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23 container secret-volume-test: STEP: delete the pod Jun 22 13:02:04.677: INFO: Waiting for pod pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23 to disappear Jun 22 13:02:04.685: INFO: Pod pod-secrets-5be11a25-4f0d-44bc-8a58-0d33ae79fd23 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:02:04.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6014" for this suite. 
Jun 22 13:02:10.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:02:10.779: INFO: namespace secrets-6014 deletion completed in 6.090577525s • [SLOW TEST:10.391 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:02:10.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 22 13:02:10.907: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:10.925: INFO: Number of nodes with available pods: 0 Jun 22 13:02:10.925: INFO: Node iruya-worker is running more than one daemon pod Jun 22 13:02:11.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:11.933: INFO: Number of nodes with available pods: 0 Jun 22 13:02:11.933: INFO: Node iruya-worker is running more than one daemon pod Jun 22 13:02:13.096: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:13.371: INFO: Number of nodes with available pods: 0 Jun 22 13:02:13.371: INFO: Node iruya-worker is running more than one daemon pod Jun 22 13:02:13.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:13.934: INFO: Number of nodes with available pods: 0 Jun 22 13:02:13.934: INFO: Node iruya-worker is running more than one daemon pod Jun 22 13:02:14.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:14.933: INFO: Number of nodes with available pods: 2 Jun 22 13:02:14.933: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 22 13:02:14.963: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:14.966: INFO: Number of nodes with available pods: 1 Jun 22 13:02:14.966: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:15.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:15.974: INFO: Number of nodes with available pods: 1 Jun 22 13:02:15.974: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:16.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:16.976: INFO: Number of nodes with available pods: 1 Jun 22 13:02:16.976: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:17.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:17.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:17.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:18.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:18.976: INFO: Number of nodes with available pods: 1 Jun 22 13:02:18.976: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:19.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:19.976: INFO: Number of nodes with available pods: 1 Jun 22 13:02:19.976: INFO: 
Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:20.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:20.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:20.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:21.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:21.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:21.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:22.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:22.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:22.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:23.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:23.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:23.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:24.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:24.975: INFO: Number of nodes with available pods: 1 Jun 22 13:02:24.975: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 13:02:25.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 13:02:25.976: INFO: Number of 
nodes with available pods: 2 Jun 22 13:02:25.976: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3621, will wait for the garbage collector to delete the pods Jun 22 13:02:26.039: INFO: Deleting DaemonSet.extensions daemon-set took: 6.494246ms Jun 22 13:02:26.339: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.259764ms Jun 22 13:02:31.943: INFO: Number of nodes with available pods: 0 Jun 22 13:02:31.943: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 13:02:31.947: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3621/daemonsets","resourceVersion":"17851606"},"items":null} Jun 22 13:02:31.950: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3621/pods","resourceVersion":"17851606"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:02:31.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3621" for this suite. 
Jun 22 13:02:38.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:02:38.081: INFO: namespace daemonsets-3621 deletion completed in 6.10317829s • [SLOW TEST:27.302 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:02:38.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-584949e8-988f-4076-a190-2c3cf15fe0f0 STEP: Creating a pod to test consume configMaps Jun 22 13:02:38.163: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347" in namespace "projected-3182" to be "success or failure" Jun 22 13:02:38.171: INFO: Pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.054899ms Jun 22 13:02:40.217: INFO: Pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053942213s Jun 22 13:02:42.221: INFO: Pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347": Phase="Running", Reason="", readiness=true. Elapsed: 4.05850216s Jun 22 13:02:44.225: INFO: Pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062505283s STEP: Saw pod success Jun 22 13:02:44.225: INFO: Pod "pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347" satisfied condition "success or failure" Jun 22 13:02:44.228: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347 container projected-configmap-volume-test: STEP: delete the pod Jun 22 13:02:44.250: INFO: Waiting for pod pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347 to disappear Jun 22 13:02:44.254: INFO: Pod pod-projected-configmaps-50d358e4-b2fe-49be-b7ed-f1073e134347 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:02:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3182" for this suite. 
Jun 22 13:02:50.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:02:50.384: INFO: namespace projected-3182 deletion completed in 6.126603988s • [SLOW TEST:12.302 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:02:50.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jun 22 13:02:50.521: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2907" to be "success or failure" Jun 22 13:02:50.524: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.126408ms Jun 22 13:02:52.528: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006811446s Jun 22 13:02:54.628: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10703236s Jun 22 13:02:56.632: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11082132s STEP: Saw pod success Jun 22 13:02:56.632: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 22 13:02:56.634: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 22 13:02:56.663: INFO: Waiting for pod pod-host-path-test to disappear Jun 22 13:02:56.704: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:02:56.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2907" for this suite. Jun 22 13:03:02.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:03:02.872: INFO: namespace hostpath-2907 deletion completed in 6.164506757s • [SLOW TEST:12.488 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:03:02.874: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:03:08.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9219" for this suite. Jun 22 13:03:38.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:03:38.136: INFO: namespace replication-controller-9219 deletion completed in 30.099461281s • [SLOW TEST:35.262 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:03:38.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:03:38.251: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cd19def2-9f08-414c-b541-40d5ede8549a", Controller:(*bool)(0xc002b7ca72), BlockOwnerDeletion:(*bool)(0xc002b7ca73)}} Jun 22 13:03:38.302: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7a0bf069-4a9a-46e1-a235-97b1d3381b9c", Controller:(*bool)(0xc002b7cbfa), BlockOwnerDeletion:(*bool)(0xc002b7cbfb)}} Jun 22 13:03:38.329: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"708d3618-901c-42a1-800b-558925a2d9f7", Controller:(*bool)(0xc002de16a2), BlockOwnerDeletion:(*bool)(0xc002de16a3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:03:43.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2044" for this suite. 
Jun 22 13:03:49.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:03:49.450: INFO: namespace gc-2044 deletion completed in 6.086428375s • [SLOW TEST:11.314 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:03:49.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-8169 I0622 13:03:49.570674 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8169, replica count: 1 I0622 13:03:50.621497 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:03:51.621785 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:03:52.622010 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:03:53.622253 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 13:03:53.751: INFO: Created: latency-svc-6pzln Jun 22 13:03:53.779: INFO: Got endpoints: latency-svc-6pzln [56.480549ms] Jun 22 13:03:53.811: INFO: Created: latency-svc-cbqz2 Jun 22 13:03:53.826: INFO: Got endpoints: latency-svc-cbqz2 [47.115165ms] Jun 22 13:03:53.847: INFO: Created: latency-svc-lkmsp Jun 22 13:03:53.862: INFO: Got endpoints: latency-svc-lkmsp [83.269509ms] Jun 22 13:03:53.922: INFO: Created: latency-svc-mdrsb Jun 22 13:03:53.925: INFO: Got endpoints: latency-svc-mdrsb [146.448121ms] Jun 22 13:03:53.964: INFO: Created: latency-svc-rfg7k Jun 22 13:03:53.977: INFO: Got endpoints: latency-svc-rfg7k [198.74062ms] Jun 22 13:03:54.003: INFO: Created: latency-svc-qxdl4 Jun 22 13:03:54.114: INFO: Got endpoints: latency-svc-qxdl4 [335.082961ms] Jun 22 13:03:54.132: INFO: Created: latency-svc-x2mdj Jun 22 13:03:54.139: INFO: Got endpoints: latency-svc-x2mdj [359.980938ms] Jun 22 13:03:54.155: INFO: Created: latency-svc-mz84h Jun 22 13:03:54.187: INFO: Got endpoints: latency-svc-mz84h [408.621665ms] Jun 22 13:03:54.258: INFO: Created: latency-svc-xzmj9 Jun 22 13:03:54.267: INFO: Got endpoints: latency-svc-xzmj9 [487.819796ms] Jun 22 13:03:54.285: INFO: Created: latency-svc-ndd8g Jun 22 13:03:54.296: INFO: Got endpoints: latency-svc-ndd8g [517.229319ms] Jun 22 13:03:54.321: INFO: Created: latency-svc-68w46 Jun 22 13:03:54.347: INFO: Got endpoints: latency-svc-68w46 [568.482321ms] Jun 22 13:03:54.419: INFO: Created: latency-svc-d4kql Jun 22 13:03:54.422: INFO: Got endpoints: latency-svc-d4kql [643.597387ms] Jun 22 13:03:54.456: INFO: Created: latency-svc-xft59 Jun 22 13:03:54.477: INFO: Got endpoints: latency-svc-xft59 [698.19985ms] Jun 22 13:03:54.501: INFO: Created: latency-svc-wt6w7 Jun 22 13:03:54.570: INFO: Got 
endpoints: latency-svc-wt6w7 [790.909525ms] Jun 22 13:03:54.576: INFO: Created: latency-svc-gb6rp Jun 22 13:03:54.599: INFO: Got endpoints: latency-svc-gb6rp [820.504874ms] Jun 22 13:03:54.642: INFO: Created: latency-svc-sf6v2 Jun 22 13:03:54.658: INFO: Got endpoints: latency-svc-sf6v2 [878.874358ms] Jun 22 13:03:54.724: INFO: Created: latency-svc-p2929 Jun 22 13:03:54.740: INFO: Got endpoints: latency-svc-p2929 [914.452012ms] Jun 22 13:03:54.765: INFO: Created: latency-svc-kt946 Jun 22 13:03:54.778: INFO: Got endpoints: latency-svc-kt946 [916.172568ms] Jun 22 13:03:54.810: INFO: Created: latency-svc-27bzf Jun 22 13:03:54.874: INFO: Got endpoints: latency-svc-27bzf [948.615492ms] Jun 22 13:03:54.894: INFO: Created: latency-svc-754gk Jun 22 13:03:54.905: INFO: Got endpoints: latency-svc-754gk [927.351063ms] Jun 22 13:03:54.933: INFO: Created: latency-svc-nl8hb Jun 22 13:03:55.042: INFO: Got endpoints: latency-svc-nl8hb [928.421372ms] Jun 22 13:03:55.044: INFO: Created: latency-svc-c7bv8 Jun 22 13:03:55.049: INFO: Got endpoints: latency-svc-c7bv8 [910.661681ms] Jun 22 13:03:55.074: INFO: Created: latency-svc-jp6w7 Jun 22 13:03:55.104: INFO: Got endpoints: latency-svc-jp6w7 [916.131043ms] Jun 22 13:03:55.228: INFO: Created: latency-svc-k5tzl Jun 22 13:03:55.236: INFO: Got endpoints: latency-svc-k5tzl [968.925041ms] Jun 22 13:03:55.278: INFO: Created: latency-svc-sw7kh Jun 22 13:03:55.315: INFO: Got endpoints: latency-svc-sw7kh [1.018864898s] Jun 22 13:03:55.372: INFO: Created: latency-svc-brn5g Jun 22 13:03:55.378: INFO: Got endpoints: latency-svc-brn5g [1.030733712s] Jun 22 13:03:55.404: INFO: Created: latency-svc-pc5kj Jun 22 13:03:55.410: INFO: Got endpoints: latency-svc-pc5kj [987.990957ms] Jun 22 13:03:55.428: INFO: Created: latency-svc-cm5fn Jun 22 13:03:55.441: INFO: Got endpoints: latency-svc-cm5fn [964.071172ms] Jun 22 13:03:55.459: INFO: Created: latency-svc-z6rls Jun 22 13:03:55.534: INFO: Got endpoints: latency-svc-z6rls [964.04904ms] Jun 22 13:03:55.557: 
INFO: Created: latency-svc-zxjw4 Jun 22 13:03:55.593: INFO: Got endpoints: latency-svc-zxjw4 [993.104281ms] Jun 22 13:03:55.707: INFO: Created: latency-svc-2zhdf Jun 22 13:03:55.718: INFO: Got endpoints: latency-svc-2zhdf [1.06005951s] Jun 22 13:03:55.743: INFO: Created: latency-svc-qtm59 Jun 22 13:03:55.748: INFO: Got endpoints: latency-svc-qtm59 [1.007401763s] Jun 22 13:03:55.767: INFO: Created: latency-svc-fbfv8 Jun 22 13:03:55.779: INFO: Got endpoints: latency-svc-fbfv8 [1.000266599s] Jun 22 13:03:55.797: INFO: Created: latency-svc-l8g2n Jun 22 13:03:55.832: INFO: Got endpoints: latency-svc-l8g2n [958.273588ms] Jun 22 13:03:55.855: INFO: Created: latency-svc-jlw9d Jun 22 13:03:55.883: INFO: Got endpoints: latency-svc-jlw9d [978.505985ms] Jun 22 13:03:55.914: INFO: Created: latency-svc-hpl9w Jun 22 13:03:55.929: INFO: Got endpoints: latency-svc-hpl9w [886.878366ms] Jun 22 13:03:55.982: INFO: Created: latency-svc-8w4cs Jun 22 13:03:55.985: INFO: Got endpoints: latency-svc-8w4cs [935.411221ms] Jun 22 13:03:56.007: INFO: Created: latency-svc-th4bh Jun 22 13:03:56.020: INFO: Got endpoints: latency-svc-th4bh [915.877516ms] Jun 22 13:03:56.049: INFO: Created: latency-svc-vqdvc Jun 22 13:03:56.062: INFO: Got endpoints: latency-svc-vqdvc [826.827192ms] Jun 22 13:03:56.140: INFO: Created: latency-svc-6g84n Jun 22 13:03:56.147: INFO: Got endpoints: latency-svc-6g84n [831.765152ms] Jun 22 13:03:56.187: INFO: Created: latency-svc-vhxlc Jun 22 13:03:56.223: INFO: Got endpoints: latency-svc-vhxlc [844.417771ms] Jun 22 13:03:56.277: INFO: Created: latency-svc-xv78t Jun 22 13:03:56.285: INFO: Got endpoints: latency-svc-xv78t [874.759326ms] Jun 22 13:03:56.303: INFO: Created: latency-svc-v4qh4 Jun 22 13:03:56.328: INFO: Got endpoints: latency-svc-v4qh4 [886.310411ms] Jun 22 13:03:56.402: INFO: Created: latency-svc-lcs57 Jun 22 13:03:56.406: INFO: Got endpoints: latency-svc-lcs57 [871.922266ms] Jun 22 13:03:56.433: INFO: Created: latency-svc-wnz96 Jun 22 13:03:56.457: INFO: Got 
endpoints: latency-svc-wnz96 [864.301715ms] Jun 22 13:03:56.490: INFO: Created: latency-svc-f59n4 Jun 22 13:03:56.539: INFO: Got endpoints: latency-svc-f59n4 [821.189815ms] Jun 22 13:03:56.544: INFO: Created: latency-svc-w424p Jun 22 13:03:56.557: INFO: Got endpoints: latency-svc-w424p [808.784703ms] Jun 22 13:03:56.574: INFO: Created: latency-svc-59tp5 Jun 22 13:03:56.587: INFO: Got endpoints: latency-svc-59tp5 [808.09311ms] Jun 22 13:03:56.607: INFO: Created: latency-svc-njxxl Jun 22 13:03:56.623: INFO: Got endpoints: latency-svc-njxxl [790.817934ms] Jun 22 13:03:57.325: INFO: Created: latency-svc-zv776 Jun 22 13:03:57.332: INFO: Got endpoints: latency-svc-zv776 [1.44823886s] Jun 22 13:03:57.990: INFO: Created: latency-svc-jsbs5 Jun 22 13:03:58.015: INFO: Got endpoints: latency-svc-jsbs5 [2.085560924s] Jun 22 13:03:58.048: INFO: Created: latency-svc-g5wwt Jun 22 13:03:58.062: INFO: Got endpoints: latency-svc-g5wwt [2.076972724s] Jun 22 13:03:58.083: INFO: Created: latency-svc-69pwv Jun 22 13:03:58.110: INFO: Got endpoints: latency-svc-69pwv [2.09092067s] Jun 22 13:03:58.164: INFO: Created: latency-svc-c26wd Jun 22 13:03:58.170: INFO: Got endpoints: latency-svc-c26wd [2.107820181s] Jun 22 13:03:58.258: INFO: Created: latency-svc-jv5ql Jun 22 13:03:58.260: INFO: Got endpoints: latency-svc-jv5ql [2.113759728s] Jun 22 13:03:58.305: INFO: Created: latency-svc-dds8t Jun 22 13:03:58.309: INFO: Got endpoints: latency-svc-dds8t [2.086240247s] Jun 22 13:03:58.426: INFO: Created: latency-svc-xrcb8 Jun 22 13:03:58.429: INFO: Got endpoints: latency-svc-xrcb8 [2.143885615s] Jun 22 13:03:58.454: INFO: Created: latency-svc-4vn2h Jun 22 13:03:58.465: INFO: Got endpoints: latency-svc-4vn2h [2.137493967s] Jun 22 13:03:58.486: INFO: Created: latency-svc-qzrl7 Jun 22 13:03:58.496: INFO: Got endpoints: latency-svc-qzrl7 [2.090037197s] Jun 22 13:03:58.521: INFO: Created: latency-svc-2cj2b Jun 22 13:03:58.581: INFO: Got endpoints: latency-svc-2cj2b [2.124286596s] Jun 22 13:03:58.583: 
INFO: Created: latency-svc-5lf2q Jun 22 13:03:58.592: INFO: Got endpoints: latency-svc-5lf2q [2.053100913s] Jun 22 13:03:58.620: INFO: Created: latency-svc-wc46p Jun 22 13:03:58.641: INFO: Got endpoints: latency-svc-wc46p [2.084059248s] Jun 22 13:03:58.659: INFO: Created: latency-svc-5cgng Jun 22 13:03:58.671: INFO: Got endpoints: latency-svc-5cgng [2.083814012s] Jun 22 13:03:58.759: INFO: Created: latency-svc-lbdw2 Jun 22 13:03:58.768: INFO: Got endpoints: latency-svc-lbdw2 [2.144873218s] Jun 22 13:03:58.837: INFO: Created: latency-svc-2s4rq Jun 22 13:03:58.852: INFO: Got endpoints: latency-svc-2s4rq [1.519803184s] Jun 22 13:03:58.935: INFO: Created: latency-svc-mhj84 Jun 22 13:03:58.938: INFO: Got endpoints: latency-svc-mhj84 [923.066512ms] Jun 22 13:03:58.978: INFO: Created: latency-svc-nnzvz Jun 22 13:03:58.991: INFO: Got endpoints: latency-svc-nnzvz [928.640659ms] Jun 22 13:03:59.013: INFO: Created: latency-svc-js6c8 Jun 22 13:03:59.026: INFO: Got endpoints: latency-svc-js6c8 [915.809093ms] Jun 22 13:03:59.079: INFO: Created: latency-svc-2ckhb Jun 22 13:03:59.081: INFO: Got endpoints: latency-svc-2ckhb [910.706495ms] Jun 22 13:03:59.106: INFO: Created: latency-svc-g675p Jun 22 13:03:59.123: INFO: Got endpoints: latency-svc-g675p [862.760809ms] Jun 22 13:03:59.146: INFO: Created: latency-svc-277kk Jun 22 13:03:59.160: INFO: Got endpoints: latency-svc-277kk [850.67591ms] Jun 22 13:03:59.240: INFO: Created: latency-svc-sd5rd Jun 22 13:03:59.242: INFO: Got endpoints: latency-svc-sd5rd [813.167282ms] Jun 22 13:03:59.295: INFO: Created: latency-svc-rcwtn Jun 22 13:03:59.311: INFO: Got endpoints: latency-svc-rcwtn [846.097393ms] Jun 22 13:03:59.335: INFO: Created: latency-svc-bkqqz Jun 22 13:03:59.371: INFO: Got endpoints: latency-svc-bkqqz [875.207624ms] Jun 22 13:03:59.385: INFO: Created: latency-svc-zgwqt Jun 22 13:03:59.400: INFO: Got endpoints: latency-svc-zgwqt [819.097363ms] Jun 22 13:03:59.445: INFO: Created: latency-svc-w6kb7 Jun 22 13:03:59.461: INFO: Got 
endpoints: latency-svc-w6kb7 [868.899162ms] Jun 22 13:03:59.510: INFO: Created: latency-svc-qn5cf Jun 22 13:03:59.515: INFO: Got endpoints: latency-svc-qn5cf [874.207632ms] Jun 22 13:03:59.532: INFO: Created: latency-svc-8lks6 Jun 22 13:03:59.549: INFO: Got endpoints: latency-svc-8lks6 [878.40661ms] Jun 22 13:03:59.570: INFO: Created: latency-svc-4mksz Jun 22 13:03:59.582: INFO: Got endpoints: latency-svc-4mksz [813.96482ms] Jun 22 13:03:59.659: INFO: Created: latency-svc-6v6nv Jun 22 13:03:59.685: INFO: Created: latency-svc-qfd5q Jun 22 13:03:59.686: INFO: Got endpoints: latency-svc-6v6nv [833.902536ms] Jun 22 13:03:59.696: INFO: Got endpoints: latency-svc-qfd5q [758.376643ms] Jun 22 13:03:59.718: INFO: Created: latency-svc-xsvrk Jun 22 13:03:59.733: INFO: Got endpoints: latency-svc-xsvrk [742.285658ms] Jun 22 13:03:59.755: INFO: Created: latency-svc-7kvmj Jun 22 13:03:59.796: INFO: Got endpoints: latency-svc-7kvmj [769.892677ms] Jun 22 13:03:59.808: INFO: Created: latency-svc-575vm Jun 22 13:03:59.823: INFO: Got endpoints: latency-svc-575vm [742.23626ms] Jun 22 13:03:59.841: INFO: Created: latency-svc-hhdhq Jun 22 13:03:59.854: INFO: Got endpoints: latency-svc-hhdhq [730.968644ms] Jun 22 13:03:59.871: INFO: Created: latency-svc-7rnfw Jun 22 13:03:59.884: INFO: Got endpoints: latency-svc-7rnfw [724.274954ms] Jun 22 13:03:59.947: INFO: Created: latency-svc-l94mg Jun 22 13:03:59.950: INFO: Got endpoints: latency-svc-l94mg [707.153054ms] Jun 22 13:03:59.976: INFO: Created: latency-svc-272xh Jun 22 13:03:59.993: INFO: Got endpoints: latency-svc-272xh [681.391801ms] Jun 22 13:04:00.018: INFO: Created: latency-svc-mdrbs Jun 22 13:04:00.036: INFO: Got endpoints: latency-svc-mdrbs [664.657417ms] Jun 22 13:04:00.188: INFO: Created: latency-svc-zm5wf Jun 22 13:04:00.189: INFO: Got endpoints: latency-svc-zm5wf [789.010742ms] Jun 22 13:04:00.253: INFO: Created: latency-svc-j4q69 Jun 22 13:04:00.263: INFO: Got endpoints: latency-svc-j4q69 [802.189461ms] Jun 22 13:04:00.360: 
INFO: Created: latency-svc-pzkdf Jun 22 13:04:00.399: INFO: Got endpoints: latency-svc-pzkdf [883.48015ms] Jun 22 13:04:00.400: INFO: Created: latency-svc-4d55n Jun 22 13:04:00.408: INFO: Got endpoints: latency-svc-4d55n [858.615369ms] Jun 22 13:04:00.426: INFO: Created: latency-svc-9wljg Jun 22 13:04:00.438: INFO: Got endpoints: latency-svc-9wljg [856.183933ms] Jun 22 13:04:00.515: INFO: Created: latency-svc-7x7fp Jun 22 13:04:00.554: INFO: Got endpoints: latency-svc-7x7fp [868.878926ms] Jun 22 13:04:00.555: INFO: Created: latency-svc-mbcp6 Jun 22 13:04:00.571: INFO: Got endpoints: latency-svc-mbcp6 [874.791824ms] Jun 22 13:04:00.603: INFO: Created: latency-svc-zw45w Jun 22 13:04:00.653: INFO: Got endpoints: latency-svc-zw45w [920.02315ms] Jun 22 13:04:00.672: INFO: Created: latency-svc-952n7 Jun 22 13:04:00.692: INFO: Got endpoints: latency-svc-952n7 [895.61659ms] Jun 22 13:04:00.714: INFO: Created: latency-svc-h8d2c Jun 22 13:04:00.728: INFO: Got endpoints: latency-svc-h8d2c [904.248949ms] Jun 22 13:04:00.755: INFO: Created: latency-svc-ljgm8 Jun 22 13:04:00.814: INFO: Got endpoints: latency-svc-ljgm8 [960.042457ms] Jun 22 13:04:00.819: INFO: Created: latency-svc-fn9dq Jun 22 13:04:00.837: INFO: Got endpoints: latency-svc-fn9dq [952.801579ms] Jun 22 13:04:00.867: INFO: Created: latency-svc-ck9bh Jun 22 13:04:00.879: INFO: Got endpoints: latency-svc-ck9bh [929.549556ms] Jun 22 13:04:00.912: INFO: Created: latency-svc-zn44n Jun 22 13:04:00.958: INFO: Got endpoints: latency-svc-zn44n [965.243011ms] Jun 22 13:04:00.969: INFO: Created: latency-svc-tt2ng Jun 22 13:04:00.982: INFO: Got endpoints: latency-svc-tt2ng [945.932196ms] Jun 22 13:04:01.005: INFO: Created: latency-svc-k88tw Jun 22 13:04:01.019: INFO: Got endpoints: latency-svc-k88tw [829.842144ms] Jun 22 13:04:01.053: INFO: Created: latency-svc-6htwr Jun 22 13:04:01.096: INFO: Got endpoints: latency-svc-6htwr [832.133838ms] Jun 22 13:04:01.110: INFO: Created: latency-svc-rvngx Jun 22 13:04:01.126: INFO: Got 
endpoints: latency-svc-rvngx [727.484271ms] Jun 22 13:04:01.146: INFO: Created: latency-svc-597db Jun 22 13:04:01.163: INFO: Got endpoints: latency-svc-597db [755.094243ms] Jun 22 13:04:01.191: INFO: Created: latency-svc-5cjj9 Jun 22 13:04:01.239: INFO: Got endpoints: latency-svc-5cjj9 [801.023387ms] Jun 22 13:04:01.244: INFO: Created: latency-svc-l84s2 Jun 22 13:04:01.267: INFO: Got endpoints: latency-svc-l84s2 [711.894041ms] Jun 22 13:04:01.284: INFO: Created: latency-svc-79n78 Jun 22 13:04:01.296: INFO: Got endpoints: latency-svc-79n78 [724.49007ms] Jun 22 13:04:01.314: INFO: Created: latency-svc-n7dt4 Jun 22 13:04:01.326: INFO: Got endpoints: latency-svc-n7dt4 [672.978324ms] Jun 22 13:04:01.414: INFO: Created: latency-svc-tx29k Jun 22 13:04:01.422: INFO: Got endpoints: latency-svc-tx29k [730.354037ms] Jun 22 13:04:01.455: INFO: Created: latency-svc-5k2nc Jun 22 13:04:01.471: INFO: Got endpoints: latency-svc-5k2nc [742.972578ms] Jun 22 13:04:01.490: INFO: Created: latency-svc-6cxp8 Jun 22 13:04:01.501: INFO: Got endpoints: latency-svc-6cxp8 [686.593401ms] Jun 22 13:04:01.557: INFO: Created: latency-svc-tzp5s Jun 22 13:04:01.560: INFO: Got endpoints: latency-svc-tzp5s [723.002872ms] Jun 22 13:04:01.584: INFO: Created: latency-svc-j4bsw Jun 22 13:04:01.598: INFO: Got endpoints: latency-svc-j4bsw [718.414507ms] Jun 22 13:04:01.620: INFO: Created: latency-svc-n9kb7 Jun 22 13:04:01.635: INFO: Got endpoints: latency-svc-n9kb7 [676.433619ms] Jun 22 13:04:01.652: INFO: Created: latency-svc-lk2cw Jun 22 13:04:01.701: INFO: Got endpoints: latency-svc-lk2cw [719.34602ms] Jun 22 13:04:01.719: INFO: Created: latency-svc-xwmlb Jun 22 13:04:01.731: INFO: Got endpoints: latency-svc-xwmlb [711.125579ms] Jun 22 13:04:01.759: INFO: Created: latency-svc-w9snd Jun 22 13:04:01.773: INFO: Got endpoints: latency-svc-w9snd [677.805881ms] Jun 22 13:04:01.794: INFO: Created: latency-svc-mfdb4 Jun 22 13:04:01.844: INFO: Got endpoints: latency-svc-mfdb4 [717.696585ms] Jun 22 13:04:01.872: 
INFO: Created: latency-svc-vrp68 Jun 22 13:04:01.888: INFO: Got endpoints: latency-svc-vrp68 [724.911736ms] Jun 22 13:04:01.904: INFO: Created: latency-svc-l82x9 Jun 22 13:04:01.918: INFO: Got endpoints: latency-svc-l82x9 [678.756456ms] Jun 22 13:04:01.941: INFO: Created: latency-svc-xjcgr Jun 22 13:04:01.988: INFO: Got endpoints: latency-svc-xjcgr [721.596452ms] Jun 22 13:04:02.016: INFO: Created: latency-svc-nkrxb Jun 22 13:04:02.034: INFO: Got endpoints: latency-svc-nkrxb [737.835512ms] Jun 22 13:04:02.082: INFO: Created: latency-svc-zqtds Jun 22 13:04:02.131: INFO: Got endpoints: latency-svc-zqtds [805.242725ms] Jun 22 13:04:02.161: INFO: Created: latency-svc-hfflr Jun 22 13:04:02.168: INFO: Got endpoints: latency-svc-hfflr [745.62125ms] Jun 22 13:04:02.196: INFO: Created: latency-svc-fcc9z Jun 22 13:04:02.226: INFO: Got endpoints: latency-svc-fcc9z [754.97662ms] Jun 22 13:04:02.330: INFO: Created: latency-svc-86ctw Jun 22 13:04:02.348: INFO: Got endpoints: latency-svc-86ctw [847.148015ms] Jun 22 13:04:02.392: INFO: Created: latency-svc-l4cj9 Jun 22 13:04:02.400: INFO: Got endpoints: latency-svc-l4cj9 [840.230871ms] Jun 22 13:04:02.474: INFO: Created: latency-svc-6tj5w Jun 22 13:04:02.476: INFO: Got endpoints: latency-svc-6tj5w [878.673139ms] Jun 22 13:04:02.508: INFO: Created: latency-svc-bmlqg Jun 22 13:04:02.521: INFO: Got endpoints: latency-svc-bmlqg [886.341698ms] Jun 22 13:04:02.547: INFO: Created: latency-svc-fqw65 Jun 22 13:04:02.557: INFO: Got endpoints: latency-svc-fqw65 [856.281765ms] Jun 22 13:04:02.624: INFO: Created: latency-svc-2qg2k Jun 22 13:04:02.645: INFO: Got endpoints: latency-svc-2qg2k [914.840581ms] Jun 22 13:04:02.671: INFO: Created: latency-svc-wdlb2 Jun 22 13:04:02.696: INFO: Got endpoints: latency-svc-wdlb2 [922.571487ms] Jun 22 13:04:02.857: INFO: Created: latency-svc-qgq6h Jun 22 13:04:02.861: INFO: Got endpoints: latency-svc-qgq6h [1.017119052s] Jun 22 13:04:02.892: INFO: Created: latency-svc-s5l2x Jun 22 13:04:02.924: INFO: Got 
endpoints: latency-svc-s5l2x [1.036478805s] Jun 22 13:04:02.994: INFO: Created: latency-svc-22xrv Jun 22 13:04:03.008: INFO: Got endpoints: latency-svc-22xrv [1.089802895s] Jun 22 13:04:03.030: INFO: Created: latency-svc-q7l5g Jun 22 13:04:03.144: INFO: Got endpoints: latency-svc-q7l5g [1.155455163s] Jun 22 13:04:03.180: INFO: Created: latency-svc-495vs Jun 22 13:04:03.195: INFO: Got endpoints: latency-svc-495vs [1.160984391s] Jun 22 13:04:03.216: INFO: Created: latency-svc-mm8n7 Jun 22 13:04:03.306: INFO: Got endpoints: latency-svc-mm8n7 [1.174124204s] Jun 22 13:04:03.308: INFO: Created: latency-svc-k9t75 Jun 22 13:04:03.338: INFO: Got endpoints: latency-svc-k9t75 [1.170224886s] Jun 22 13:04:03.366: INFO: Created: latency-svc-mbpv5 Jun 22 13:04:03.449: INFO: Got endpoints: latency-svc-mbpv5 [1.22340643s] Jun 22 13:04:03.462: INFO: Created: latency-svc-bq5cj Jun 22 13:04:03.513: INFO: Got endpoints: latency-svc-bq5cj [1.164829897s] Jun 22 13:04:03.549: INFO: Created: latency-svc-5skgq Jun 22 13:04:03.593: INFO: Got endpoints: latency-svc-5skgq [1.192748216s] Jun 22 13:04:03.606: INFO: Created: latency-svc-77pvs Jun 22 13:04:03.623: INFO: Got endpoints: latency-svc-77pvs [1.146207321s] Jun 22 13:04:03.642: INFO: Created: latency-svc-t2rq8 Jun 22 13:04:03.686: INFO: Got endpoints: latency-svc-t2rq8 [1.165280584s] Jun 22 13:04:03.741: INFO: Created: latency-svc-fp2jh Jun 22 13:04:03.771: INFO: Got endpoints: latency-svc-fp2jh [1.213255532s] Jun 22 13:04:03.804: INFO: Created: latency-svc-r5pb8 Jun 22 13:04:03.828: INFO: Got endpoints: latency-svc-r5pb8 [1.182161934s] Jun 22 13:04:03.881: INFO: Created: latency-svc-jz4x2 Jun 22 13:04:03.883: INFO: Got endpoints: latency-svc-jz4x2 [1.186845456s] Jun 22 13:04:03.939: INFO: Created: latency-svc-4zdhb Jun 22 13:04:03.951: INFO: Got endpoints: latency-svc-4zdhb [1.089408385s] Jun 22 13:04:03.978: INFO: Created: latency-svc-tnbr9 Jun 22 13:04:04.012: INFO: Got endpoints: latency-svc-tnbr9 [1.087605784s] Jun 22 13:04:04.026: 
INFO: Created: latency-svc-s5jk2 Jun 22 13:04:04.035: INFO: Got endpoints: latency-svc-s5jk2 [1.026761835s] Jun 22 13:04:04.072: INFO: Created: latency-svc-s46nt Jun 22 13:04:04.087: INFO: Got endpoints: latency-svc-s46nt [943.626302ms] Jun 22 13:04:04.150: INFO: Created: latency-svc-7jbht Jun 22 13:04:04.153: INFO: Got endpoints: latency-svc-7jbht [958.145524ms] Jun 22 13:04:04.206: INFO: Created: latency-svc-w4v9x Jun 22 13:04:04.216: INFO: Got endpoints: latency-svc-w4v9x [910.45303ms] Jun 22 13:04:04.236: INFO: Created: latency-svc-ccb8x Jun 22 13:04:04.270: INFO: Got endpoints: latency-svc-ccb8x [931.316711ms] Jun 22 13:04:04.324: INFO: Created: latency-svc-685gv Jun 22 13:04:04.332: INFO: Got endpoints: latency-svc-685gv [882.27056ms] Jun 22 13:04:04.414: INFO: Created: latency-svc-gnwd9 Jun 22 13:04:04.418: INFO: Got endpoints: latency-svc-gnwd9 [904.50282ms] Jun 22 13:04:04.464: INFO: Created: latency-svc-25rmr Jun 22 13:04:04.479: INFO: Got endpoints: latency-svc-25rmr [885.730494ms] Jun 22 13:04:04.587: INFO: Created: latency-svc-kqfpl Jun 22 13:04:04.590: INFO: Got endpoints: latency-svc-kqfpl [967.061875ms] Jun 22 13:04:04.646: INFO: Created: latency-svc-s87tk Jun 22 13:04:04.659: INFO: Got endpoints: latency-svc-s87tk [972.824298ms] Jun 22 13:04:04.682: INFO: Created: latency-svc-xvmsn Jun 22 13:04:04.738: INFO: Got endpoints: latency-svc-xvmsn [966.823494ms] Jun 22 13:04:04.760: INFO: Created: latency-svc-r62r9 Jun 22 13:04:04.774: INFO: Got endpoints: latency-svc-r62r9 [945.935495ms] Jun 22 13:04:04.796: INFO: Created: latency-svc-jbtw9 Jun 22 13:04:04.830: INFO: Got endpoints: latency-svc-jbtw9 [946.858772ms] Jun 22 13:04:04.904: INFO: Created: latency-svc-rjtmj Jun 22 13:04:04.907: INFO: Got endpoints: latency-svc-rjtmj [956.187301ms] Jun 22 13:04:04.946: INFO: Created: latency-svc-p6pfg Jun 22 13:04:04.960: INFO: Got endpoints: latency-svc-p6pfg [948.10653ms] Jun 22 13:04:04.992: INFO: Created: latency-svc-wclft Jun 22 13:04:05.036: INFO: Got 
endpoints: latency-svc-wclft [1.00086157s] Jun 22 13:04:05.058: INFO: Created: latency-svc-6llxx Jun 22 13:04:05.075: INFO: Got endpoints: latency-svc-6llxx [987.812987ms] Jun 22 13:04:05.095: INFO: Created: latency-svc-jsqkk Jun 22 13:04:05.106: INFO: Got endpoints: latency-svc-jsqkk [952.622218ms] Jun 22 13:04:05.126: INFO: Created: latency-svc-pg5kk Jun 22 13:04:05.167: INFO: Got endpoints: latency-svc-pg5kk [951.300041ms] Jun 22 13:04:05.174: INFO: Created: latency-svc-q8t4d Jun 22 13:04:05.190: INFO: Got endpoints: latency-svc-q8t4d [920.36396ms] Jun 22 13:04:05.214: INFO: Created: latency-svc-jnmrf Jun 22 13:04:05.226: INFO: Got endpoints: latency-svc-jnmrf [894.584039ms] Jun 22 13:04:05.253: INFO: Created: latency-svc-sqfj6 Jun 22 13:04:05.342: INFO: Got endpoints: latency-svc-sqfj6 [923.828463ms] Jun 22 13:04:05.344: INFO: Created: latency-svc-phgpq Jun 22 13:04:05.354: INFO: Got endpoints: latency-svc-phgpq [874.735053ms] Jun 22 13:04:05.379: INFO: Created: latency-svc-g2hk4 Jun 22 13:04:05.389: INFO: Got endpoints: latency-svc-g2hk4 [799.528088ms] Jun 22 13:04:05.412: INFO: Created: latency-svc-gncjk Jun 22 13:04:05.426: INFO: Got endpoints: latency-svc-gncjk [766.666718ms] Jun 22 13:04:05.486: INFO: Created: latency-svc-6tlxr Jun 22 13:04:05.492: INFO: Got endpoints: latency-svc-6tlxr [754.293694ms] Jun 22 13:04:05.519: INFO: Created: latency-svc-ffnrf Jun 22 13:04:05.528: INFO: Got endpoints: latency-svc-ffnrf [754.488834ms] Jun 22 13:04:05.553: INFO: Created: latency-svc-9hd4b Jun 22 13:04:05.564: INFO: Got endpoints: latency-svc-9hd4b [734.474082ms] Jun 22 13:04:05.642: INFO: Created: latency-svc-p5xls Jun 22 13:04:05.644: INFO: Got endpoints: latency-svc-p5xls [736.961221ms] Jun 22 13:04:05.706: INFO: Created: latency-svc-57fdw Jun 22 13:04:05.721: INFO: Got endpoints: latency-svc-57fdw [760.96668ms] Jun 22 13:04:05.779: INFO: Created: latency-svc-rhb7d Jun 22 13:04:05.781: INFO: Got endpoints: latency-svc-rhb7d [745.384042ms] Jun 22 13:04:05.811: 
INFO: Created: latency-svc-jbh5x Jun 22 13:04:05.824: INFO: Got endpoints: latency-svc-jbh5x [748.365471ms] Jun 22 13:04:05.847: INFO: Created: latency-svc-dmk2x Jun 22 13:04:05.874: INFO: Got endpoints: latency-svc-dmk2x [768.006448ms] Jun 22 13:04:05.923: INFO: Created: latency-svc-9mvrl Jun 22 13:04:05.932: INFO: Got endpoints: latency-svc-9mvrl [764.744518ms] Jun 22 13:04:05.954: INFO: Created: latency-svc-khbvz Jun 22 13:04:05.969: INFO: Got endpoints: latency-svc-khbvz [778.977028ms] Jun 22 13:04:05.990: INFO: Created: latency-svc-7lg6t Jun 22 13:04:06.008: INFO: Got endpoints: latency-svc-7lg6t [781.784152ms] Jun 22 13:04:06.067: INFO: Created: latency-svc-x4sfx Jun 22 13:04:06.071: INFO: Got endpoints: latency-svc-x4sfx [729.459284ms] Jun 22 13:04:06.096: INFO: Created: latency-svc-l8kng Jun 22 13:04:06.108: INFO: Got endpoints: latency-svc-l8kng [753.958461ms] Jun 22 13:04:06.126: INFO: Created: latency-svc-z9q54 Jun 22 13:04:06.138: INFO: Got endpoints: latency-svc-z9q54 [748.836216ms] Jun 22 13:04:06.158: INFO: Created: latency-svc-sq9vz Jun 22 13:04:06.198: INFO: Got endpoints: latency-svc-sq9vz [771.758616ms] Jun 22 13:04:06.207: INFO: Created: latency-svc-l5dzc Jun 22 13:04:06.223: INFO: Got endpoints: latency-svc-l5dzc [730.996991ms] Jun 22 13:04:06.246: INFO: Created: latency-svc-vbnbl Jun 22 13:04:06.259: INFO: Got endpoints: latency-svc-vbnbl [730.985041ms] Jun 22 13:04:06.276: INFO: Created: latency-svc-jv7ps Jun 22 13:04:06.295: INFO: Got endpoints: latency-svc-jv7ps [730.765611ms] Jun 22 13:04:06.356: INFO: Created: latency-svc-6d76k Jun 22 13:04:06.386: INFO: Got endpoints: latency-svc-6d76k [742.005028ms] Jun 22 13:04:06.420: INFO: Created: latency-svc-sh57x Jun 22 13:04:06.434: INFO: Got endpoints: latency-svc-sh57x [712.951447ms] Jun 22 13:04:06.506: INFO: Created: latency-svc-4pqw6 Jun 22 13:04:06.506: INFO: Got endpoints: latency-svc-4pqw6 [724.987563ms] Jun 22 13:04:06.567: INFO: Created: latency-svc-wx794 Jun 22 13:04:06.578: INFO: Got 
endpoints: latency-svc-wx794 [754.506242ms] Jun 22 13:04:06.653: INFO: Created: latency-svc-pxl9l Jun 22 13:04:06.655: INFO: Got endpoints: latency-svc-pxl9l [781.537712ms] Jun 22 13:04:06.655: INFO: Latencies: [47.115165ms 83.269509ms 146.448121ms 198.74062ms 335.082961ms 359.980938ms 408.621665ms 487.819796ms 517.229319ms 568.482321ms 643.597387ms 664.657417ms 672.978324ms 676.433619ms 677.805881ms 678.756456ms 681.391801ms 686.593401ms 698.19985ms 707.153054ms 711.125579ms 711.894041ms 712.951447ms 717.696585ms 718.414507ms 719.34602ms 721.596452ms 723.002872ms 724.274954ms 724.49007ms 724.911736ms 724.987563ms 727.484271ms 729.459284ms 730.354037ms 730.765611ms 730.968644ms 730.985041ms 730.996991ms 734.474082ms 736.961221ms 737.835512ms 742.005028ms 742.23626ms 742.285658ms 742.972578ms 745.384042ms 745.62125ms 748.365471ms 748.836216ms 753.958461ms 754.293694ms 754.488834ms 754.506242ms 754.97662ms 755.094243ms 758.376643ms 760.96668ms 764.744518ms 766.666718ms 768.006448ms 769.892677ms 771.758616ms 778.977028ms 781.537712ms 781.784152ms 789.010742ms 790.817934ms 790.909525ms 799.528088ms 801.023387ms 802.189461ms 805.242725ms 808.09311ms 808.784703ms 813.167282ms 813.96482ms 819.097363ms 820.504874ms 821.189815ms 826.827192ms 829.842144ms 831.765152ms 832.133838ms 833.902536ms 840.230871ms 844.417771ms 846.097393ms 847.148015ms 850.67591ms 856.183933ms 856.281765ms 858.615369ms 862.760809ms 864.301715ms 868.878926ms 868.899162ms 871.922266ms 874.207632ms 874.735053ms 874.759326ms 874.791824ms 875.207624ms 878.40661ms 878.673139ms 878.874358ms 882.27056ms 883.48015ms 885.730494ms 886.310411ms 886.341698ms 886.878366ms 894.584039ms 895.61659ms 904.248949ms 904.50282ms 910.45303ms 910.661681ms 910.706495ms 914.452012ms 914.840581ms 915.809093ms 915.877516ms 916.131043ms 916.172568ms 920.02315ms 920.36396ms 922.571487ms 923.066512ms 923.828463ms 927.351063ms 928.421372ms 928.640659ms 929.549556ms 931.316711ms 935.411221ms 943.626302ms 945.932196ms 945.935495ms 
946.858772ms 948.10653ms 948.615492ms 951.300041ms 952.622218ms 952.801579ms 956.187301ms 958.145524ms 958.273588ms 960.042457ms 964.04904ms 964.071172ms 965.243011ms 966.823494ms 967.061875ms 968.925041ms 972.824298ms 978.505985ms 987.812987ms 987.990957ms 993.104281ms 1.000266599s 1.00086157s 1.007401763s 1.017119052s 1.018864898s 1.026761835s 1.030733712s 1.036478805s 1.06005951s 1.087605784s 1.089408385s 1.089802895s 1.146207321s 1.155455163s 1.160984391s 1.164829897s 1.165280584s 1.170224886s 1.174124204s 1.182161934s 1.186845456s 1.192748216s 1.213255532s 1.22340643s 1.44823886s 1.519803184s 2.053100913s 2.076972724s 2.083814012s 2.084059248s 2.085560924s 2.086240247s 2.090037197s 2.09092067s 2.107820181s 2.113759728s 2.124286596s 2.137493967s 2.143885615s 2.144873218s] Jun 22 13:04:06.656: INFO: 50 %ile: 874.759326ms Jun 22 13:04:06.656: INFO: 90 %ile: 1.186845456s Jun 22 13:04:06.656: INFO: 99 %ile: 2.143885615s Jun 22 13:04:06.656: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:04:06.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8169" for this suite. 
Jun 22 13:04:28.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:04:28.795: INFO: namespace svc-latency-8169 deletion completed in 22.111091635s • [SLOW TEST:39.345 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:04:28.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-87388c43-9a62-46bb-9dfd-9686f9745195 in namespace container-probe-2033 Jun 22 13:04:32.947: INFO: Started pod test-webserver-87388c43-9a62-46bb-9dfd-9686f9745195 in namespace container-probe-2033 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 13:04:32.951: INFO: Initial restart count of pod test-webserver-87388c43-9a62-46bb-9dfd-9686f9745195 is 0 STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:08:33.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2033" for this suite. Jun 22 13:08:39.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:08:39.805: INFO: namespace container-probe-2033 deletion completed in 6.170529968s • [SLOW TEST:251.009 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:08:39.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:09:06.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6865" for this suite. Jun 22 13:09:12.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:09:12.130: INFO: namespace namespaces-6865 deletion completed in 6.110185561s STEP: Destroying namespace "nsdeletetest-9462" for this suite. Jun 22 13:09:12.133: INFO: Namespace nsdeletetest-9462 was already deleted STEP: Destroying namespace "nsdeletetest-9031" for this suite. Jun 22 13:09:18.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:09:18.247: INFO: namespace nsdeletetest-9031 deletion completed in 6.114540756s • [SLOW TEST:38.442 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:09:18.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3236, will wait for the garbage collector to delete the pods Jun 22 13:09:24.382: INFO: Deleting Job.batch foo took: 7.107794ms Jun 22 13:09:24.682: INFO: Terminating Job.batch foo pods took: 300.272741ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:10:02.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3236" for this suite. Jun 22 13:10:08.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:10:08.410: INFO: namespace job-3236 deletion completed in 6.11857509s • [SLOW TEST:50.162 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:10:08.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-f11274e1-92d9-4910-baf3-c74a7927a84b in namespace container-probe-2340 Jun 22 13:10:12.519: INFO: Started pod liveness-f11274e1-92d9-4910-baf3-c74a7927a84b in namespace container-probe-2340 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 13:10:12.522: INFO: Initial restart count of pod liveness-f11274e1-92d9-4910-baf3-c74a7927a84b is 0 Jun 22 13:10:34.574: INFO: Restart count of pod container-probe-2340/liveness-f11274e1-92d9-4910-baf3-c74a7927a84b is now 1 (22.051586351s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:10:34.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2340" for this suite. 
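The restart observed above ("Restart count … is now 1" after ~22s) is the kubelet restarting the container once `failureThreshold` consecutive liveness-probe failures accumulate. A simplified model of that counting logic — an illustrative sketch, not kubelet's actual implementation:

```python
def restarts(probe_results, failure_threshold=3):
    # Count restarts given a sequence of probe outcomes (True = healthy).
    # Simplified kubelet model: failure_threshold consecutive failures
    # trigger one restart and reset the consecutive-failure counter.
    consecutive = 0
    count = 0
    for healthy in probe_results:
        if healthy:
            consecutive = 0
        else:
            consecutive += 1
            if consecutive == failure_threshold:
                count += 1
                consecutive = 0
    return count
```

This also explains the preceding test: a pod whose `/healthz` keeps succeeding never accumulates consecutive failures, so its restart count stays at 0 for the whole observation window.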
Jun 22 13:10:40.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:10:40.716: INFO: namespace container-probe-2340 deletion completed in 6.092527891s • [SLOW TEST:32.306 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:10:40.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4881/configmap-test-c44cada7-1866-4959-b2b2-594a2a05fe75 STEP: Creating a pod to test consume configMaps Jun 22 13:10:40.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4" in namespace "configmap-4881" to be "success or failure" Jun 22 13:10:40.830: INFO: Pod "pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.686039ms Jun 22 13:10:42.835: INFO: Pod "pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009718574s Jun 22 13:10:44.839: INFO: Pod "pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014007915s STEP: Saw pod success Jun 22 13:10:44.839: INFO: Pod "pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4" satisfied condition "success or failure" Jun 22 13:10:44.843: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4 container env-test: STEP: delete the pod Jun 22 13:10:44.859: INFO: Waiting for pod pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4 to disappear Jun 22 13:10:44.864: INFO: Pod pod-configmaps-d6e37d0e-83f4-4716-bc4c-30126491bcd4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:10:44.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4881" for this suite. Jun 22 13:10:51.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:10:51.102: INFO: namespace configmap-4881 deletion completed in 6.234986926s • [SLOW TEST:10.385 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:10:51.102: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 22 13:10:51.150: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:10:59.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9605" for this suite. Jun 22 13:11:21.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:11:21.864: INFO: namespace init-container-9605 deletion completed in 22.119930959s • [SLOW TEST:30.762 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:11:21.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] 
should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-a2cef40f-f8e8-46dc-b4ef-7fd381cabc7e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:11:21.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8627" for this suite. Jun 22 13:11:27.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:11:28.053: INFO: namespace configmap-8627 deletion completed in 6.109961372s • [SLOW TEST:6.189 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:11:28.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-7775/secret-test-e26b6f7d-de53-4e67-8cf1-e224eef529fa STEP: Creating a pod to test consume secrets Jun 22 13:11:28.155: 
INFO: Waiting up to 5m0s for pod "pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9" in namespace "secrets-7775" to be "success or failure" Jun 22 13:11:28.175: INFO: Pod "pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.012996ms Jun 22 13:11:30.180: INFO: Pod "pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024709508s Jun 22 13:11:32.184: INFO: Pod "pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028979665s STEP: Saw pod success Jun 22 13:11:32.184: INFO: Pod "pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9" satisfied condition "success or failure" Jun 22 13:11:32.188: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9 container env-test: STEP: delete the pod Jun 22 13:11:32.267: INFO: Waiting for pod pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9 to disappear Jun 22 13:11:32.273: INFO: Pod pod-configmaps-3d5612d7-b8bd-4176-a66c-fbc87b6727f9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:11:32.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7775" for this suite. 
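Secrets like `secret-test-e26b6f7d-…` above are stored with base64-encoded values in their `data` field; the consuming container's env vars receive the decoded bytes. A small sketch of that encoding step (the helper name is made up for illustration):

```python
import base64

def encode_secret_data(data):
    # Kubernetes Secret objects carry base64-encoded values in `data`;
    # a pod consuming the secret via the environment sees the decoded value.
    return {k: base64.b64encode(v.encode()).decode() for k, v in data.items()}
```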
Jun 22 13:11:38.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:11:38.365: INFO: namespace secrets-7775 deletion completed in 6.089049203s • [SLOW TEST:10.312 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:11:38.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 13:11:42.992: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8832dcb3-050b-4045-9fcf-728b7dfa5d22" Jun 22 13:11:42.992: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8832dcb3-050b-4045-9fcf-728b7dfa5d22" in namespace "pods-7435" to be "terminated due to deadline exceeded" Jun 22 13:11:43.001: INFO: Pod "pod-update-activedeadlineseconds-8832dcb3-050b-4045-9fcf-728b7dfa5d22": 
Phase="Running", Reason="", readiness=true. Elapsed: 9.098958ms Jun 22 13:11:45.005: INFO: Pod "pod-update-activedeadlineseconds-8832dcb3-050b-4045-9fcf-728b7dfa5d22": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.013596259s Jun 22 13:11:45.005: INFO: Pod "pod-update-activedeadlineseconds-8832dcb3-050b-4045-9fcf-728b7dfa5d22" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:11:45.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7435" for this suite. Jun 22 13:11:51.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:11:51.104: INFO: namespace pods-7435 deletion completed in 6.093929155s • [SLOW TEST:12.738 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:11:51.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for 
pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 22 13:11:51.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9054' Jun 22 13:11:53.798: INFO: stderr: "" Jun 22 13:11:53.798: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 22 13:11:54.802: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:54.802: INFO: Found 0 / 1 Jun 22 13:11:55.803: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:55.803: INFO: Found 0 / 1 Jun 22 13:11:56.803: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:56.803: INFO: Found 0 / 1 Jun 22 13:11:57.803: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:57.803: INFO: Found 1 / 1 Jun 22 13:11:57.803: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 22 13:11:57.807: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:57.807: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 22 13:11:57.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-59phh --namespace=kubectl-9054 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 22 13:11:57.909: INFO: stderr: "" Jun 22 13:11:57.909: INFO: stdout: "pod/redis-master-59phh patched\n" STEP: checking annotations Jun 22 13:11:57.912: INFO: Selector matched 1 pods for map[app:redis] Jun 22 13:11:57.912: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:11:57.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9054" for this suite. 
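The `kubectl patch … -p {"metadata":{"annotations":{"x":"y"}}}` call above recursively merges the patch into the pod object, leaving sibling fields untouched. A simplified model of that merge — strategic merge patch has additional list-handling semantics not modeled here, and the pod dict is a made-up stand-in:

```python
def merge_patch(original, patch):
    # Recursively merge `patch` into `original`: nested dicts merge,
    # scalars overwrite, and a null value deletes the key (JSON merge
    # patch semantics; a simplified sketch, not kubectl's implementation).
    result = dict(original)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)
        elif value is None:
            result.pop(key, None)
        else:
            result[key] = value
    return result

pod = {"metadata": {"name": "redis-master-59phh", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

The test's subsequent "checking annotations" step then just verifies that each matched pod carries the merged `x: y` annotation.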
Jun 22 13:12:19.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:12:20.000: INFO: namespace kubectl-9054 deletion completed in 22.078821692s • [SLOW TEST:28.896 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:12:20.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:12:20.048: INFO: Creating ReplicaSet my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5 Jun 22 13:12:20.089: INFO: Pod name my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5: Found 0 pods out of 1 Jun 22 13:12:25.104: INFO: Pod name my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5: Found 1 pods out of 1 Jun 22 13:12:25.104: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5" is running Jun 22 13:12:25.107: INFO: Pod "my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5-rwhgg" is running 
(conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:12:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:12:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:12:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:12:20 +0000 UTC Reason: Message:}]) Jun 22 13:12:25.107: INFO: Trying to dial the pod Jun 22 13:12:30.120: INFO: Controller my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5: Got expected result from replica 1 [my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5-rwhgg]: "my-hostname-basic-242a307c-a24c-418d-b5c4-d75bdb41adb5-rwhgg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:12:30.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2401" for this suite. 
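The condition dump above is what the ReplicaSet test inspects before dialing the pod: it waits for the `Ready` condition to report `True`. A minimal sketch of that check over a conditions list (field names follow the `Type:`/`Status:` pairs printed in the log):

```python
def is_ready(conditions):
    # True when the pod's Ready condition has status "True", mirroring
    # the readiness gate the test checks before dialing each replica.
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in conditions)
```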
Jun 22 13:12:36.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:12:36.218: INFO: namespace replicaset-2401 deletion completed in 6.09426461s • [SLOW TEST:16.217 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:12:36.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 22 13:12:40.846: INFO: Successfully updated pod "annotationupdatec766cf61-495f-44b7-8824-5fedad96b799" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:12:42.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6735" for this suite. 
Jun 22 13:13:04.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:13:05.065: INFO: namespace projected-6735 deletion completed in 22.145248632s • [SLOW TEST:28.845 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:13:05.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6029 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6029 STEP: Waiting until all stateful set ss replicas will 
be running in namespace statefulset-6029 Jun 22 13:13:05.156: INFO: Found 0 stateful pods, waiting for 1 Jun 22 13:13:15.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 22 13:13:15.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 13:13:15.415: INFO: stderr: "I0622 13:13:15.289604 90 log.go:172] (0xc000a64580) (0xc0005908c0) Create stream\nI0622 13:13:15.289654 90 log.go:172] (0xc000a64580) (0xc0005908c0) Stream added, broadcasting: 1\nI0622 13:13:15.292040 90 log.go:172] (0xc000a64580) Reply frame received for 1\nI0622 13:13:15.292083 90 log.go:172] (0xc000a64580) (0xc000814000) Create stream\nI0622 13:13:15.292102 90 log.go:172] (0xc000a64580) (0xc000814000) Stream added, broadcasting: 3\nI0622 13:13:15.292954 90 log.go:172] (0xc000a64580) Reply frame received for 3\nI0622 13:13:15.292987 90 log.go:172] (0xc000a64580) (0xc000a40000) Create stream\nI0622 13:13:15.293807 90 log.go:172] (0xc000a64580) (0xc000a40000) Stream added, broadcasting: 5\nI0622 13:13:15.295789 90 log.go:172] (0xc000a64580) Reply frame received for 5\nI0622 13:13:15.360396 90 log.go:172] (0xc000a64580) Data frame received for 5\nI0622 13:13:15.360425 90 log.go:172] (0xc000a40000) (5) Data frame handling\nI0622 13:13:15.360442 90 log.go:172] (0xc000a40000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:13:15.406841 90 log.go:172] (0xc000a64580) Data frame received for 5\nI0622 13:13:15.406876 90 log.go:172] (0xc000a40000) (5) Data frame handling\nI0622 13:13:15.406931 90 log.go:172] (0xc000a64580) Data frame received for 3\nI0622 13:13:15.407072 90 log.go:172] (0xc000814000) (3) Data frame handling\nI0622 13:13:15.407126 90 log.go:172] (0xc000814000) (3) Data frame sent\nI0622 13:13:15.407146 
90 log.go:172] (0xc000a64580) Data frame received for 3\nI0622 13:13:15.407157 90 log.go:172] (0xc000814000) (3) Data frame handling\nI0622 13:13:15.409735 90 log.go:172] (0xc000a64580) Data frame received for 1\nI0622 13:13:15.409757 90 log.go:172] (0xc0005908c0) (1) Data frame handling\nI0622 13:13:15.409769 90 log.go:172] (0xc0005908c0) (1) Data frame sent\nI0622 13:13:15.409792 90 log.go:172] (0xc000a64580) (0xc0005908c0) Stream removed, broadcasting: 1\nI0622 13:13:15.409814 90 log.go:172] (0xc000a64580) Go away received\nI0622 13:13:15.410336 90 log.go:172] (0xc000a64580) (0xc0005908c0) Stream removed, broadcasting: 1\nI0622 13:13:15.410359 90 log.go:172] (0xc000a64580) (0xc000814000) Stream removed, broadcasting: 3\nI0622 13:13:15.410375 90 log.go:172] (0xc000a64580) (0xc000a40000) Stream removed, broadcasting: 5\n" Jun 22 13:13:15.416: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:13:15.416: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:13:15.419: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 22 13:13:25.453: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:13:25.453: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:13:25.469: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999759s Jun 22 13:13:26.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993251709s Jun 22 13:13:27.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988277956s Jun 22 13:13:28.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984293038s Jun 22 13:13:29.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979266061s Jun 22 13:13:30.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974128102s Jun 22 13:13:31.497: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969247349s Jun 22 13:13:32.502: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965423283s Jun 22 13:13:33.506: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96053687s Jun 22 13:13:34.512: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.005295ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6029 Jun 22 13:13:35.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 13:13:35.717: INFO: stderr: "I0622 13:13:35.650473 111 log.go:172] (0xc000a008f0) (0xc0009f0be0) Create stream\nI0622 13:13:35.650535 111 log.go:172] (0xc000a008f0) (0xc0009f0be0) Stream added, broadcasting: 1\nI0622 13:13:35.654700 111 log.go:172] (0xc000a008f0) Reply frame received for 1\nI0622 13:13:35.654740 111 log.go:172] (0xc000a008f0) (0xc0009f0000) Create stream\nI0622 13:13:35.654753 111 log.go:172] (0xc000a008f0) (0xc0009f0000) Stream added, broadcasting: 3\nI0622 13:13:35.655559 111 log.go:172] (0xc000a008f0) Reply frame received for 3\nI0622 13:13:35.655598 111 log.go:172] (0xc000a008f0) (0xc0009f00a0) Create stream\nI0622 13:13:35.655612 111 log.go:172] (0xc000a008f0) (0xc0009f00a0) Stream added, broadcasting: 5\nI0622 13:13:35.656397 111 log.go:172] (0xc000a008f0) Reply frame received for 5\nI0622 13:13:35.708165 111 log.go:172] (0xc000a008f0) Data frame received for 3\nI0622 13:13:35.708293 111 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0622 13:13:35.708314 111 log.go:172] (0xc0009f0000) (3) Data frame sent\nI0622 13:13:35.708325 111 log.go:172] (0xc000a008f0) Data frame received for 3\nI0622 13:13:35.708332 111 log.go:172] (0xc0009f0000) (3) Data frame handling\nI0622 13:13:35.708373 111 log.go:172] (0xc000a008f0) Data frame received for 5\nI0622 
13:13:35.708409 111 log.go:172] (0xc0009f00a0) (5) Data frame handling\nI0622 13:13:35.708423 111 log.go:172] (0xc0009f00a0) (5) Data frame sent\nI0622 13:13:35.708435 111 log.go:172] (0xc000a008f0) Data frame received for 5\nI0622 13:13:35.708451 111 log.go:172] (0xc0009f00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 13:13:35.710256 111 log.go:172] (0xc000a008f0) Data frame received for 1\nI0622 13:13:35.710274 111 log.go:172] (0xc0009f0be0) (1) Data frame handling\nI0622 13:13:35.710287 111 log.go:172] (0xc0009f0be0) (1) Data frame sent\nI0622 13:13:35.710297 111 log.go:172] (0xc000a008f0) (0xc0009f0be0) Stream removed, broadcasting: 1\nI0622 13:13:35.710369 111 log.go:172] (0xc000a008f0) Go away received\nI0622 13:13:35.710644 111 log.go:172] (0xc000a008f0) (0xc0009f0be0) Stream removed, broadcasting: 1\nI0622 13:13:35.710667 111 log.go:172] (0xc000a008f0) (0xc0009f0000) Stream removed, broadcasting: 3\nI0622 13:13:35.710678 111 log.go:172] (0xc000a008f0) (0xc0009f00a0) Stream removed, broadcasting: 5\n" Jun 22 13:13:35.717: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 13:13:35.717: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 13:13:35.721: INFO: Found 1 stateful pods, waiting for 3 Jun 22 13:13:45.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:13:45.726: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:13:45.726: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 22 13:13:45.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Jun 22 13:13:45.932: INFO: stderr: "I0622 13:13:45.861578 131 log.go:172] (0xc000930420) (0xc0006ca6e0) Create stream\nI0622 13:13:45.861633 131 log.go:172] (0xc000930420) (0xc0006ca6e0) Stream added, broadcasting: 1\nI0622 13:13:45.864822 131 log.go:172] (0xc000930420) Reply frame received for 1\nI0622 13:13:45.864873 131 log.go:172] (0xc000930420) (0xc0006ca000) Create stream\nI0622 13:13:45.864888 131 log.go:172] (0xc000930420) (0xc0006ca000) Stream added, broadcasting: 3\nI0622 13:13:45.865944 131 log.go:172] (0xc000930420) Reply frame received for 3\nI0622 13:13:45.865988 131 log.go:172] (0xc000930420) (0xc00061a1e0) Create stream\nI0622 13:13:45.866003 131 log.go:172] (0xc000930420) (0xc00061a1e0) Stream added, broadcasting: 5\nI0622 13:13:45.866975 131 log.go:172] (0xc000930420) Reply frame received for 5\nI0622 13:13:45.925279 131 log.go:172] (0xc000930420) Data frame received for 3\nI0622 13:13:45.925301 131 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0622 13:13:45.925310 131 log.go:172] (0xc0006ca000) (3) Data frame sent\nI0622 13:13:45.925337 131 log.go:172] (0xc000930420) Data frame received for 5\nI0622 13:13:45.925343 131 log.go:172] (0xc00061a1e0) (5) Data frame handling\nI0622 13:13:45.925350 131 log.go:172] (0xc00061a1e0) (5) Data frame sent\nI0622 13:13:45.925355 131 log.go:172] (0xc000930420) Data frame received for 5\nI0622 13:13:45.925361 131 log.go:172] (0xc00061a1e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:13:45.925602 131 log.go:172] (0xc000930420) Data frame received for 3\nI0622 13:13:45.925632 131 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0622 13:13:45.927788 131 log.go:172] (0xc000930420) Data frame received for 1\nI0622 13:13:45.927807 131 log.go:172] (0xc0006ca6e0) (1) Data frame handling\nI0622 13:13:45.927815 131 log.go:172] (0xc0006ca6e0) (1) Data frame sent\nI0622 13:13:45.927823 131 log.go:172] (0xc000930420) 
(0xc0006ca6e0) Stream removed, broadcasting: 1\nI0622 13:13:45.927943 131 log.go:172] (0xc000930420) Go away received\nI0622 13:13:45.928058 131 log.go:172] (0xc000930420) (0xc0006ca6e0) Stream removed, broadcasting: 1\nI0622 13:13:45.928078 131 log.go:172] (0xc000930420) (0xc0006ca000) Stream removed, broadcasting: 3\nI0622 13:13:45.928084 131 log.go:172] (0xc000930420) (0xc00061a1e0) Stream removed, broadcasting: 5\n" Jun 22 13:13:45.932: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:13:45.932: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:13:45.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 13:13:46.207: INFO: stderr: "I0622 13:13:46.056563 153 log.go:172] (0xc000932420) (0xc0003826e0) Create stream\nI0622 13:13:46.056777 153 log.go:172] (0xc000932420) (0xc0003826e0) Stream added, broadcasting: 1\nI0622 13:13:46.059469 153 log.go:172] (0xc000932420) Reply frame received for 1\nI0622 13:13:46.059520 153 log.go:172] (0xc000932420) (0xc000974000) Create stream\nI0622 13:13:46.059545 153 log.go:172] (0xc000932420) (0xc000974000) Stream added, broadcasting: 3\nI0622 13:13:46.061528 153 log.go:172] (0xc000932420) Reply frame received for 3\nI0622 13:13:46.061593 153 log.go:172] (0xc000932420) (0xc0005ae320) Create stream\nI0622 13:13:46.061626 153 log.go:172] (0xc000932420) (0xc0005ae320) Stream added, broadcasting: 5\nI0622 13:13:46.062908 153 log.go:172] (0xc000932420) Reply frame received for 5\nI0622 13:13:46.154861 153 log.go:172] (0xc000932420) Data frame received for 5\nI0622 13:13:46.154909 153 log.go:172] (0xc0005ae320) (5) Data frame handling\nI0622 13:13:46.154939 153 log.go:172] (0xc0005ae320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 
13:13:46.200174 153 log.go:172] (0xc000932420) Data frame received for 3\nI0622 13:13:46.200200 153 log.go:172] (0xc000974000) (3) Data frame handling\nI0622 13:13:46.200214 153 log.go:172] (0xc000974000) (3) Data frame sent\nI0622 13:13:46.200220 153 log.go:172] (0xc000932420) Data frame received for 3\nI0622 13:13:46.200225 153 log.go:172] (0xc000974000) (3) Data frame handling\nI0622 13:13:46.200261 153 log.go:172] (0xc000932420) Data frame received for 5\nI0622 13:13:46.200280 153 log.go:172] (0xc0005ae320) (5) Data frame handling\nI0622 13:13:46.201841 153 log.go:172] (0xc000932420) Data frame received for 1\nI0622 13:13:46.201860 153 log.go:172] (0xc0003826e0) (1) Data frame handling\nI0622 13:13:46.201878 153 log.go:172] (0xc0003826e0) (1) Data frame sent\nI0622 13:13:46.201897 153 log.go:172] (0xc000932420) (0xc0003826e0) Stream removed, broadcasting: 1\nI0622 13:13:46.201911 153 log.go:172] (0xc000932420) Go away received\nI0622 13:13:46.202182 153 log.go:172] (0xc000932420) (0xc0003826e0) Stream removed, broadcasting: 1\nI0622 13:13:46.202203 153 log.go:172] (0xc000932420) (0xc000974000) Stream removed, broadcasting: 3\nI0622 13:13:46.202209 153 log.go:172] (0xc000932420) (0xc0005ae320) Stream removed, broadcasting: 5\n" Jun 22 13:13:46.207: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:13:46.207: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:13:46.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 13:13:46.463: INFO: stderr: "I0622 13:13:46.328586 176 log.go:172] (0xc000ada630) (0xc0005d8b40) Create stream\nI0622 13:13:46.328659 176 log.go:172] (0xc000ada630) (0xc0005d8b40) Stream added, broadcasting: 1\nI0622 13:13:46.338378 176 log.go:172] (0xc000ada630) Reply frame received for 
1\nI0622 13:13:46.338417 176 log.go:172] (0xc000ada630) (0xc0005d8280) Create stream\nI0622 13:13:46.338427 176 log.go:172] (0xc000ada630) (0xc0005d8280) Stream added, broadcasting: 3\nI0622 13:13:46.339216 176 log.go:172] (0xc000ada630) Reply frame received for 3\nI0622 13:13:46.339246 176 log.go:172] (0xc000ada630) (0xc00002e000) Create stream\nI0622 13:13:46.339256 176 log.go:172] (0xc000ada630) (0xc00002e000) Stream added, broadcasting: 5\nI0622 13:13:46.340095 176 log.go:172] (0xc000ada630) Reply frame received for 5\nI0622 13:13:46.418067 176 log.go:172] (0xc000ada630) Data frame received for 5\nI0622 13:13:46.418087 176 log.go:172] (0xc00002e000) (5) Data frame handling\nI0622 13:13:46.418095 176 log.go:172] (0xc00002e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:13:46.456481 176 log.go:172] (0xc000ada630) Data frame received for 3\nI0622 13:13:46.456507 176 log.go:172] (0xc0005d8280) (3) Data frame handling\nI0622 13:13:46.456523 176 log.go:172] (0xc0005d8280) (3) Data frame sent\nI0622 13:13:46.456531 176 log.go:172] (0xc000ada630) Data frame received for 3\nI0622 13:13:46.456537 176 log.go:172] (0xc0005d8280) (3) Data frame handling\nI0622 13:13:46.457447 176 log.go:172] (0xc000ada630) Data frame received for 5\nI0622 13:13:46.457490 176 log.go:172] (0xc00002e000) (5) Data frame handling\nI0622 13:13:46.458814 176 log.go:172] (0xc000ada630) Data frame received for 1\nI0622 13:13:46.458829 176 log.go:172] (0xc0005d8b40) (1) Data frame handling\nI0622 13:13:46.458837 176 log.go:172] (0xc0005d8b40) (1) Data frame sent\nI0622 13:13:46.458850 176 log.go:172] (0xc000ada630) (0xc0005d8b40) Stream removed, broadcasting: 1\nI0622 13:13:46.458888 176 log.go:172] (0xc000ada630) Go away received\nI0622 13:13:46.459143 176 log.go:172] (0xc000ada630) (0xc0005d8b40) Stream removed, broadcasting: 1\nI0622 13:13:46.459157 176 log.go:172] (0xc000ada630) (0xc0005d8280) Stream removed, broadcasting: 3\nI0622 13:13:46.459166 176 
log.go:172] (0xc000ada630) (0xc00002e000) Stream removed, broadcasting: 5\n" Jun 22 13:13:46.463: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:13:46.463: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:13:46.463: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:13:46.470: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 22 13:13:56.479: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:13:56.479: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:13:56.479: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:13:56.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999179s Jun 22 13:13:57.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992103988s Jun 22 13:13:58.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98748974s Jun 22 13:13:59.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982155218s Jun 22 13:14:00.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976965115s Jun 22 13:14:01.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971214155s Jun 22 13:14:02.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966287245s Jun 22 13:14:03.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960670118s Jun 22 13:14:04.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94284737s Jun 22 13:14:05.554: INFO: Verifying statefulset ss doesn't scale past 3 for another 938.229703ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6029 Jun 22 13:14:06.561: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 13:14:06.829: INFO: stderr: "I0622 13:14:06.707767 196 log.go:172] (0xc00012ab00) (0xc0009e8820) Create stream\nI0622 13:14:06.707818 196 log.go:172] (0xc00012ab00) (0xc0009e8820) Stream added, broadcasting: 1\nI0622 13:14:06.721820 196 log.go:172] (0xc00012ab00) Reply frame received for 1\nI0622 13:14:06.721873 196 log.go:172] (0xc00012ab00) (0xc000a22000) Create stream\nI0622 13:14:06.721885 196 log.go:172] (0xc00012ab00) (0xc000a22000) Stream added, broadcasting: 3\nI0622 13:14:06.722938 196 log.go:172] (0xc00012ab00) Reply frame received for 3\nI0622 13:14:06.722970 196 log.go:172] (0xc00012ab00) (0xc0009e8000) Create stream\nI0622 13:14:06.722984 196 log.go:172] (0xc00012ab00) (0xc0009e8000) Stream added, broadcasting: 5\nI0622 13:14:06.724472 196 log.go:172] (0xc00012ab00) Reply frame received for 5\nI0622 13:14:06.818634 196 log.go:172] (0xc00012ab00) Data frame received for 3\nI0622 13:14:06.818677 196 log.go:172] (0xc000a22000) (3) Data frame handling\nI0622 13:14:06.818710 196 log.go:172] (0xc00012ab00) Data frame received for 5\nI0622 13:14:06.818774 196 log.go:172] (0xc0009e8000) (5) Data frame handling\nI0622 13:14:06.818788 196 log.go:172] (0xc0009e8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 13:14:06.818798 196 log.go:172] (0xc00012ab00) Data frame received for 5\nI0622 13:14:06.818824 196 log.go:172] (0xc0009e8000) (5) Data frame handling\nI0622 13:14:06.818846 196 log.go:172] (0xc000a22000) (3) Data frame sent\nI0622 13:14:06.818859 196 log.go:172] (0xc00012ab00) Data frame received for 3\nI0622 13:14:06.818868 196 log.go:172] (0xc000a22000) (3) Data frame handling\nI0622 13:14:06.820562 196 log.go:172] (0xc00012ab00) Data frame received for 1\nI0622 13:14:06.820677 196 log.go:172] (0xc0009e8820) (1) Data frame handling\nI0622 13:14:06.820724 196 log.go:172] 
(0xc0009e8820) (1) Data frame sent\nI0622 13:14:06.820776 196 log.go:172] (0xc00012ab00) (0xc0009e8820) Stream removed, broadcasting: 1\nI0622 13:14:06.820792 196 log.go:172] (0xc00012ab00) Go away received\nI0622 13:14:06.821062 196 log.go:172] (0xc00012ab00) (0xc0009e8820) Stream removed, broadcasting: 1\nI0622 13:14:06.821079 196 log.go:172] (0xc00012ab00) (0xc000a22000) Stream removed, broadcasting: 3\nI0622 13:14:06.821086 196 log.go:172] (0xc00012ab00) (0xc0009e8000) Stream removed, broadcasting: 5\n" Jun 22 13:14:06.829: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 13:14:06.829: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 13:14:06.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 13:14:07.027: INFO: stderr: "I0622 13:14:06.971039 218 log.go:172] (0xc000a7a6e0) (0xc000802820) Create stream\nI0622 13:14:06.971100 218 log.go:172] (0xc000a7a6e0) (0xc000802820) Stream added, broadcasting: 1\nI0622 13:14:06.974719 218 log.go:172] (0xc000a7a6e0) Reply frame received for 1\nI0622 13:14:06.974757 218 log.go:172] (0xc000a7a6e0) (0xc000802000) Create stream\nI0622 13:14:06.974776 218 log.go:172] (0xc000a7a6e0) (0xc000802000) Stream added, broadcasting: 3\nI0622 13:14:06.975438 218 log.go:172] (0xc000a7a6e0) Reply frame received for 3\nI0622 13:14:06.975464 218 log.go:172] (0xc000a7a6e0) (0xc000910000) Create stream\nI0622 13:14:06.975472 218 log.go:172] (0xc000a7a6e0) (0xc000910000) Stream added, broadcasting: 5\nI0622 13:14:06.976222 218 log.go:172] (0xc000a7a6e0) Reply frame received for 5\nI0622 13:14:07.020219 218 log.go:172] (0xc000a7a6e0) Data frame received for 5\nI0622 13:14:07.020255 218 log.go:172] (0xc000910000) (5) Data frame handling\nI0622 13:14:07.020266 218 log.go:172] (0xc000910000) 
(5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 13:14:07.020279 218 log.go:172] (0xc000a7a6e0) Data frame received for 3\nI0622 13:14:07.020284 218 log.go:172] (0xc000802000) (3) Data frame handling\nI0622 13:14:07.020290 218 log.go:172] (0xc000802000) (3) Data frame sent\nI0622 13:14:07.020409 218 log.go:172] (0xc000a7a6e0) Data frame received for 5\nI0622 13:14:07.020423 218 log.go:172] (0xc000910000) (5) Data frame handling\nI0622 13:14:07.020440 218 log.go:172] (0xc000a7a6e0) Data frame received for 3\nI0622 13:14:07.020448 218 log.go:172] (0xc000802000) (3) Data frame handling\nI0622 13:14:07.022309 218 log.go:172] (0xc000a7a6e0) Data frame received for 1\nI0622 13:14:07.022337 218 log.go:172] (0xc000802820) (1) Data frame handling\nI0622 13:14:07.022348 218 log.go:172] (0xc000802820) (1) Data frame sent\nI0622 13:14:07.022356 218 log.go:172] (0xc000a7a6e0) (0xc000802820) Stream removed, broadcasting: 1\nI0622 13:14:07.022471 218 log.go:172] (0xc000a7a6e0) Go away received\nI0622 13:14:07.022612 218 log.go:172] (0xc000a7a6e0) (0xc000802820) Stream removed, broadcasting: 1\nI0622 13:14:07.022629 218 log.go:172] (0xc000a7a6e0) (0xc000802000) Stream removed, broadcasting: 3\nI0622 13:14:07.022634 218 log.go:172] (0xc000a7a6e0) (0xc000910000) Stream removed, broadcasting: 5\n" Jun 22 13:14:07.027: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 13:14:07.027: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 13:14:07.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6029 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 22 13:14:07.234: INFO: stderr: "I0622 13:14:07.152451 238 log.go:172] (0xc000a20420) (0xc0009146e0) Create stream\nI0622 13:14:07.152504 238 log.go:172] (0xc000a20420) (0xc0009146e0) Stream added, broadcasting: 
1\nI0622 13:14:07.155620 238 log.go:172] (0xc000a20420) Reply frame received for 1\nI0622 13:14:07.155688 238 log.go:172] (0xc000a20420) (0xc000612280) Create stream\nI0622 13:14:07.155712 238 log.go:172] (0xc000a20420) (0xc000612280) Stream added, broadcasting: 3\nI0622 13:14:07.156773 238 log.go:172] (0xc000a20420) Reply frame received for 3\nI0622 13:14:07.156820 238 log.go:172] (0xc000a20420) (0xc0007f8000) Create stream\nI0622 13:14:07.156837 238 log.go:172] (0xc000a20420) (0xc0007f8000) Stream added, broadcasting: 5\nI0622 13:14:07.158138 238 log.go:172] (0xc000a20420) Reply frame received for 5\nI0622 13:14:07.226199 238 log.go:172] (0xc000a20420) Data frame received for 5\nI0622 13:14:07.226232 238 log.go:172] (0xc0007f8000) (5) Data frame handling\nI0622 13:14:07.226243 238 log.go:172] (0xc0007f8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 13:14:07.226264 238 log.go:172] (0xc000a20420) Data frame received for 3\nI0622 13:14:07.226301 238 log.go:172] (0xc000612280) (3) Data frame handling\nI0622 13:14:07.226319 238 log.go:172] (0xc000612280) (3) Data frame sent\nI0622 13:14:07.226333 238 log.go:172] (0xc000a20420) Data frame received for 3\nI0622 13:14:07.226342 238 log.go:172] (0xc000612280) (3) Data frame handling\nI0622 13:14:07.226437 238 log.go:172] (0xc000a20420) Data frame received for 5\nI0622 13:14:07.226466 238 log.go:172] (0xc0007f8000) (5) Data frame handling\nI0622 13:14:07.227356 238 log.go:172] (0xc000a20420) Data frame received for 1\nI0622 13:14:07.227376 238 log.go:172] (0xc0009146e0) (1) Data frame handling\nI0622 13:14:07.227386 238 log.go:172] (0xc0009146e0) (1) Data frame sent\nI0622 13:14:07.227398 238 log.go:172] (0xc000a20420) (0xc0009146e0) Stream removed, broadcasting: 1\nI0622 13:14:07.227503 238 log.go:172] (0xc000a20420) Go away received\nI0622 13:14:07.227705 238 log.go:172] (0xc000a20420) (0xc0009146e0) Stream removed, broadcasting: 1\nI0622 13:14:07.227725 238 log.go:172] (0xc000a20420) 
(0xc000612280) Stream removed, broadcasting: 3\nI0622 13:14:07.227737 238 log.go:172] (0xc000a20420) (0xc0007f8000) Stream removed, broadcasting: 5\n" Jun 22 13:14:07.234: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 22 13:14:07.234: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 22 13:14:07.234: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 22 13:14:37.250: INFO: Deleting all statefulset in ns statefulset-6029 Jun 22 13:14:37.252: INFO: Scaling statefulset ss to 0 Jun 22 13:14:37.259: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:14:37.260: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:14:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6029" for this suite. 
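Editorial note on the pattern in the log above (an illustrative local sketch, not part of the test output; the real suite runs the `mv` inside each pod via `kubectl exec`, and the directory names below are made up): moving nginx's `index.html` out of the web root makes the pod's readiness probe fail, which is what forces the StatefulSet controller to halt scaling; moving it back restores readiness. The trailing `|| true` keeps a retried exec from failing once the file has already been moved.

```shell
# Local sketch of the readiness-toggle trick (illustrative paths only).
workdir=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$workdir/index.html"
mv -v "$workdir/index.html" "$stash/" || true   # probe target gone -> pod would go NotReady
mv -v "$workdir/index.html" "$stash/" || true   # a retry is harmless thanks to '|| true'
mv -v "$stash/index.html" "$workdir/" || true   # restored -> pod would return to Ready
cat "$workdir/index.html"
```

Against a live cluster, the same toggle is the `kubectl exec --namespace=<ns> <pod> -- /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'` invocation that appears repeatedly in the log.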
Jun 22 13:14:43.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:14:43.355: INFO: namespace statefulset-6029 deletion completed in 6.078387732s • [SLOW TEST:98.290 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:14:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 22 13:14:53.508: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 
13:14:53.509: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:53.548296 7 log.go:172] (0xc000dcc420) (0xc0012a66e0) Create stream I0622 13:14:53.548340 7 log.go:172] (0xc000dcc420) (0xc0012a66e0) Stream added, broadcasting: 1 I0622 13:14:53.550949 7 log.go:172] (0xc000dcc420) Reply frame received for 1 I0622 13:14:53.550989 7 log.go:172] (0xc000dcc420) (0xc000e9a6e0) Create stream I0622 13:14:53.551001 7 log.go:172] (0xc000dcc420) (0xc000e9a6e0) Stream added, broadcasting: 3 I0622 13:14:53.551834 7 log.go:172] (0xc000dcc420) Reply frame received for 3 I0622 13:14:53.551869 7 log.go:172] (0xc000dcc420) (0xc0012a68c0) Create stream I0622 13:14:53.551880 7 log.go:172] (0xc000dcc420) (0xc0012a68c0) Stream added, broadcasting: 5 I0622 13:14:53.552946 7 log.go:172] (0xc000dcc420) Reply frame received for 5 I0622 13:14:53.626282 7 log.go:172] (0xc000dcc420) Data frame received for 5 I0622 13:14:53.626317 7 log.go:172] (0xc0012a68c0) (5) Data frame handling I0622 13:14:53.626337 7 log.go:172] (0xc000dcc420) Data frame received for 3 I0622 13:14:53.626347 7 log.go:172] (0xc000e9a6e0) (3) Data frame handling I0622 13:14:53.626360 7 log.go:172] (0xc000e9a6e0) (3) Data frame sent I0622 13:14:53.626369 7 log.go:172] (0xc000dcc420) Data frame received for 3 I0622 13:14:53.626382 7 log.go:172] (0xc000e9a6e0) (3) Data frame handling I0622 13:14:53.627558 7 log.go:172] (0xc000dcc420) Data frame received for 1 I0622 13:14:53.627590 7 log.go:172] (0xc0012a66e0) (1) Data frame handling I0622 13:14:53.627612 7 log.go:172] (0xc0012a66e0) (1) Data frame sent I0622 13:14:53.627634 7 log.go:172] (0xc000dcc420) (0xc0012a66e0) Stream removed, broadcasting: 1 I0622 13:14:53.627674 7 log.go:172] (0xc000dcc420) Go away received I0622 13:14:53.627772 7 log.go:172] (0xc000dcc420) (0xc0012a66e0) Stream removed, broadcasting: 1 I0622 13:14:53.627788 7 log.go:172] (0xc000dcc420) (0xc000e9a6e0) Stream removed, broadcasting: 3 I0622 13:14:53.627795 7 log.go:172] (0xc000dcc420) (0xc0012a68c0) 
Stream removed, broadcasting: 5 Jun 22 13:14:53.627: INFO: Exec stderr: "" Jun 22 13:14:53.627: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:53.627: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:53.663165 7 log.go:172] (0xc000dcd340) (0xc0012a6dc0) Create stream I0622 13:14:53.663194 7 log.go:172] (0xc000dcd340) (0xc0012a6dc0) Stream added, broadcasting: 1 I0622 13:14:53.665655 7 log.go:172] (0xc000dcd340) Reply frame received for 1 I0622 13:14:53.665690 7 log.go:172] (0xc000dcd340) (0xc001dea1e0) Create stream I0622 13:14:53.665701 7 log.go:172] (0xc000dcd340) (0xc001dea1e0) Stream added, broadcasting: 3 I0622 13:14:53.666582 7 log.go:172] (0xc000dcd340) Reply frame received for 3 I0622 13:14:53.666627 7 log.go:172] (0xc000dcd340) (0xc0012a6f00) Create stream I0622 13:14:53.666639 7 log.go:172] (0xc000dcd340) (0xc0012a6f00) Stream added, broadcasting: 5 I0622 13:14:53.667396 7 log.go:172] (0xc000dcd340) Reply frame received for 5 I0622 13:14:53.712116 7 log.go:172] (0xc000dcd340) Data frame received for 5 I0622 13:14:53.712137 7 log.go:172] (0xc0012a6f00) (5) Data frame handling I0622 13:14:53.712159 7 log.go:172] (0xc000dcd340) Data frame received for 3 I0622 13:14:53.712170 7 log.go:172] (0xc001dea1e0) (3) Data frame handling I0622 13:14:53.712182 7 log.go:172] (0xc001dea1e0) (3) Data frame sent I0622 13:14:53.712190 7 log.go:172] (0xc000dcd340) Data frame received for 3 I0622 13:14:53.712195 7 log.go:172] (0xc001dea1e0) (3) Data frame handling I0622 13:14:53.713227 7 log.go:172] (0xc000dcd340) Data frame received for 1 I0622 13:14:53.713260 7 log.go:172] (0xc0012a6dc0) (1) Data frame handling I0622 13:14:53.713269 7 log.go:172] (0xc0012a6dc0) (1) Data frame sent I0622 13:14:53.713279 7 log.go:172] (0xc000dcd340) (0xc0012a6dc0) Stream removed, broadcasting: 1 I0622 13:14:53.713293 7 
log.go:172] (0xc000dcd340) Go away received I0622 13:14:53.713446 7 log.go:172] (0xc000dcd340) (0xc0012a6dc0) Stream removed, broadcasting: 1 I0622 13:14:53.713468 7 log.go:172] (0xc000dcd340) (0xc001dea1e0) Stream removed, broadcasting: 3 I0622 13:14:53.713478 7 log.go:172] (0xc000dcd340) (0xc0012a6f00) Stream removed, broadcasting: 5 Jun 22 13:14:53.713: INFO: Exec stderr: "" Jun 22 13:14:53.713: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:53.713: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:53.736337 7 log.go:172] (0xc000dcdd90) (0xc0012a74a0) Create stream I0622 13:14:53.736359 7 log.go:172] (0xc000dcdd90) (0xc0012a74a0) Stream added, broadcasting: 1 I0622 13:14:53.738602 7 log.go:172] (0xc000dcdd90) Reply frame received for 1 I0622 13:14:53.738658 7 log.go:172] (0xc000dcdd90) (0xc000e9a780) Create stream I0622 13:14:53.738673 7 log.go:172] (0xc000dcdd90) (0xc000e9a780) Stream added, broadcasting: 3 I0622 13:14:53.739440 7 log.go:172] (0xc000dcdd90) Reply frame received for 3 I0622 13:14:53.739476 7 log.go:172] (0xc000dcdd90) (0xc001c914a0) Create stream I0622 13:14:53.739488 7 log.go:172] (0xc000dcdd90) (0xc001c914a0) Stream added, broadcasting: 5 I0622 13:14:53.740321 7 log.go:172] (0xc000dcdd90) Reply frame received for 5 I0622 13:14:53.813723 7 log.go:172] (0xc000dcdd90) Data frame received for 5 I0622 13:14:53.813758 7 log.go:172] (0xc001c914a0) (5) Data frame handling I0622 13:14:53.813777 7 log.go:172] (0xc000dcdd90) Data frame received for 3 I0622 13:14:53.813790 7 log.go:172] (0xc000e9a780) (3) Data frame handling I0622 13:14:53.813810 7 log.go:172] (0xc000e9a780) (3) Data frame sent I0622 13:14:53.813819 7 log.go:172] (0xc000dcdd90) Data frame received for 3 I0622 13:14:53.813823 7 log.go:172] (0xc000e9a780) (3) Data frame handling I0622 13:14:53.815205 7 log.go:172] 
(0xc000dcdd90) Data frame received for 1 I0622 13:14:53.815242 7 log.go:172] (0xc0012a74a0) (1) Data frame handling I0622 13:14:53.815263 7 log.go:172] (0xc0012a74a0) (1) Data frame sent I0622 13:14:53.815296 7 log.go:172] (0xc000dcdd90) (0xc0012a74a0) Stream removed, broadcasting: 1 I0622 13:14:53.815437 7 log.go:172] (0xc000dcdd90) (0xc0012a74a0) Stream removed, broadcasting: 1 I0622 13:14:53.815465 7 log.go:172] (0xc000dcdd90) (0xc000e9a780) Stream removed, broadcasting: 3 I0622 13:14:53.815493 7 log.go:172] (0xc000dcdd90) (0xc001c914a0) Stream removed, broadcasting: 5 Jun 22 13:14:53.815: INFO: Exec stderr: "" Jun 22 13:14:53.815: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:53.815: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:53.817364 7 log.go:172] (0xc000dcdd90) Go away received I0622 13:14:53.863118 7 log.go:172] (0xc000de51e0) (0xc001dea640) Create stream I0622 13:14:53.863140 7 log.go:172] (0xc000de51e0) (0xc001dea640) Stream added, broadcasting: 1 I0622 13:14:53.865296 7 log.go:172] (0xc000de51e0) Reply frame received for 1 I0622 13:14:53.865341 7 log.go:172] (0xc000de51e0) (0xc000e9a960) Create stream I0622 13:14:53.865352 7 log.go:172] (0xc000de51e0) (0xc000e9a960) Stream added, broadcasting: 3 I0622 13:14:53.865934 7 log.go:172] (0xc000de51e0) Reply frame received for 3 I0622 13:14:53.865957 7 log.go:172] (0xc000de51e0) (0xc001c915e0) Create stream I0622 13:14:53.865966 7 log.go:172] (0xc000de51e0) (0xc001c915e0) Stream added, broadcasting: 5 I0622 13:14:53.866615 7 log.go:172] (0xc000de51e0) Reply frame received for 5 I0622 13:14:53.923494 7 log.go:172] (0xc000de51e0) Data frame received for 3 I0622 13:14:53.923528 7 log.go:172] (0xc000e9a960) (3) Data frame handling I0622 13:14:53.923537 7 log.go:172] (0xc000e9a960) (3) Data frame sent I0622 13:14:53.923543 7 log.go:172] 
(0xc000de51e0) Data frame received for 3 I0622 13:14:53.923548 7 log.go:172] (0xc000e9a960) (3) Data frame handling I0622 13:14:53.923566 7 log.go:172] (0xc000de51e0) Data frame received for 5 I0622 13:14:53.923573 7 log.go:172] (0xc001c915e0) (5) Data frame handling I0622 13:14:53.924423 7 log.go:172] (0xc000de51e0) Data frame received for 1 I0622 13:14:53.924442 7 log.go:172] (0xc001dea640) (1) Data frame handling I0622 13:14:53.924455 7 log.go:172] (0xc001dea640) (1) Data frame sent I0622 13:14:53.924537 7 log.go:172] (0xc000de51e0) (0xc001dea640) Stream removed, broadcasting: 1 I0622 13:14:53.924554 7 log.go:172] (0xc000de51e0) Go away received I0622 13:14:53.924628 7 log.go:172] (0xc000de51e0) (0xc001dea640) Stream removed, broadcasting: 1 I0622 13:14:53.924653 7 log.go:172] (0xc000de51e0) (0xc000e9a960) Stream removed, broadcasting: 3 I0622 13:14:53.924668 7 log.go:172] (0xc000de51e0) (0xc001c915e0) Stream removed, broadcasting: 5 Jun 22 13:14:53.924: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 22 13:14:53.924: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:53.924: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:53.950972 7 log.go:172] (0xc002e77e40) (0xc001c91a40) Create stream I0622 13:14:53.951016 7 log.go:172] (0xc002e77e40) (0xc001c91a40) Stream added, broadcasting: 1 I0622 13:14:53.954165 7 log.go:172] (0xc002e77e40) Reply frame received for 1 I0622 13:14:53.954193 7 log.go:172] (0xc002e77e40) (0xc000e9aa00) Create stream I0622 13:14:53.954203 7 log.go:172] (0xc002e77e40) (0xc000e9aa00) Stream added, broadcasting: 3 I0622 13:14:53.955063 7 log.go:172] (0xc002e77e40) Reply frame received for 3 I0622 13:14:53.955097 7 log.go:172] (0xc002e77e40) (0xc0012a75e0) Create stream I0622 13:14:53.955111 7 
log.go:172] (0xc002e77e40) (0xc0012a75e0) Stream added, broadcasting: 5 I0622 13:14:53.955895 7 log.go:172] (0xc002e77e40) Reply frame received for 5 I0622 13:14:54.015978 7 log.go:172] (0xc002e77e40) Data frame received for 3 I0622 13:14:54.015999 7 log.go:172] (0xc000e9aa00) (3) Data frame handling I0622 13:14:54.016012 7 log.go:172] (0xc000e9aa00) (3) Data frame sent I0622 13:14:54.016017 7 log.go:172] (0xc002e77e40) Data frame received for 3 I0622 13:14:54.016022 7 log.go:172] (0xc000e9aa00) (3) Data frame handling I0622 13:14:54.016422 7 log.go:172] (0xc002e77e40) Data frame received for 5 I0622 13:14:54.016437 7 log.go:172] (0xc0012a75e0) (5) Data frame handling I0622 13:14:54.017726 7 log.go:172] (0xc002e77e40) Data frame received for 1 I0622 13:14:54.017763 7 log.go:172] (0xc001c91a40) (1) Data frame handling I0622 13:14:54.017780 7 log.go:172] (0xc001c91a40) (1) Data frame sent I0622 13:14:54.017815 7 log.go:172] (0xc002e77e40) (0xc001c91a40) Stream removed, broadcasting: 1 I0622 13:14:54.017846 7 log.go:172] (0xc002e77e40) Go away received I0622 13:14:54.017878 7 log.go:172] (0xc002e77e40) (0xc001c91a40) Stream removed, broadcasting: 1 I0622 13:14:54.017894 7 log.go:172] (0xc002e77e40) (0xc000e9aa00) Stream removed, broadcasting: 3 I0622 13:14:54.017903 7 log.go:172] (0xc002e77e40) (0xc0012a75e0) Stream removed, broadcasting: 5 Jun 22 13:14:54.017: INFO: Exec stderr: "" Jun 22 13:14:54.017: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:54.017: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:54.040964 7 log.go:172] (0xc001f38bb0) (0xc0012a7b80) Create stream I0622 13:14:54.040987 7 log.go:172] (0xc001f38bb0) (0xc0012a7b80) Stream added, broadcasting: 1 I0622 13:14:54.043723 7 log.go:172] (0xc001f38bb0) Reply frame received for 1 I0622 13:14:54.043768 7 log.go:172] (0xc001f38bb0) 
(0xc002cde640) Create stream I0622 13:14:54.043784 7 log.go:172] (0xc001f38bb0) (0xc002cde640) Stream added, broadcasting: 3 I0622 13:14:54.044645 7 log.go:172] (0xc001f38bb0) Reply frame received for 3 I0622 13:14:54.044677 7 log.go:172] (0xc001f38bb0) (0xc001c91ae0) Create stream I0622 13:14:54.044688 7 log.go:172] (0xc001f38bb0) (0xc001c91ae0) Stream added, broadcasting: 5 I0622 13:14:54.045744 7 log.go:172] (0xc001f38bb0) Reply frame received for 5 I0622 13:14:54.098551 7 log.go:172] (0xc001f38bb0) Data frame received for 5 I0622 13:14:54.098607 7 log.go:172] (0xc001c91ae0) (5) Data frame handling I0622 13:14:54.098644 7 log.go:172] (0xc001f38bb0) Data frame received for 3 I0622 13:14:54.098661 7 log.go:172] (0xc002cde640) (3) Data frame handling I0622 13:14:54.098683 7 log.go:172] (0xc002cde640) (3) Data frame sent I0622 13:14:54.098699 7 log.go:172] (0xc001f38bb0) Data frame received for 3 I0622 13:14:54.098712 7 log.go:172] (0xc002cde640) (3) Data frame handling I0622 13:14:54.100158 7 log.go:172] (0xc001f38bb0) Data frame received for 1 I0622 13:14:54.100182 7 log.go:172] (0xc0012a7b80) (1) Data frame handling I0622 13:14:54.100197 7 log.go:172] (0xc0012a7b80) (1) Data frame sent I0622 13:14:54.100221 7 log.go:172] (0xc001f38bb0) (0xc0012a7b80) Stream removed, broadcasting: 1 I0622 13:14:54.100236 7 log.go:172] (0xc001f38bb0) Go away received I0622 13:14:54.100435 7 log.go:172] (0xc001f38bb0) (0xc0012a7b80) Stream removed, broadcasting: 1 I0622 13:14:54.100469 7 log.go:172] (0xc001f38bb0) (0xc002cde640) Stream removed, broadcasting: 3 I0622 13:14:54.100485 7 log.go:172] (0xc001f38bb0) (0xc001c91ae0) Stream removed, broadcasting: 5 Jun 22 13:14:54.100: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 22 13:14:54.100: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:54.100: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:54.131592 7 log.go:172] (0xc001068c60) (0xc002cde960) Create stream I0622 13:14:54.131624 7 log.go:172] (0xc001068c60) (0xc002cde960) Stream added, broadcasting: 1 I0622 13:14:54.134417 7 log.go:172] (0xc001068c60) Reply frame received for 1 I0622 13:14:54.134454 7 log.go:172] (0xc001068c60) (0xc002cdea00) Create stream I0622 13:14:54.134464 7 log.go:172] (0xc001068c60) (0xc002cdea00) Stream added, broadcasting: 3 I0622 13:14:54.135596 7 log.go:172] (0xc001068c60) Reply frame received for 3 I0622 13:14:54.135633 7 log.go:172] (0xc001068c60) (0xc001dea820) Create stream I0622 13:14:54.135643 7 log.go:172] (0xc001068c60) (0xc001dea820) Stream added, broadcasting: 5 I0622 13:14:54.136556 7 log.go:172] (0xc001068c60) Reply frame received for 5 I0622 13:14:54.190716 7 log.go:172] (0xc001068c60) Data frame received for 5 I0622 13:14:54.190767 7 log.go:172] (0xc001dea820) (5) Data frame handling I0622 13:14:54.190805 7 log.go:172] (0xc001068c60) Data frame received for 3 I0622 13:14:54.190824 7 log.go:172] (0xc002cdea00) (3) Data frame handling I0622 13:14:54.190846 7 log.go:172] (0xc002cdea00) (3) Data frame sent I0622 13:14:54.190864 7 log.go:172] (0xc001068c60) Data frame received for 3 I0622 13:14:54.190878 7 log.go:172] (0xc002cdea00) (3) Data frame handling I0622 13:14:54.192096 7 log.go:172] (0xc001068c60) Data frame received for 1 I0622 13:14:54.192125 7 log.go:172] (0xc002cde960) (1) Data frame handling I0622 13:14:54.192140 7 log.go:172] (0xc002cde960) (1) Data frame sent I0622 13:14:54.192167 7 log.go:172] (0xc001068c60) (0xc002cde960) Stream removed, broadcasting: 1 I0622 13:14:54.192192 7 log.go:172] (0xc001068c60) Go away received I0622 13:14:54.192320 7 log.go:172] (0xc001068c60) (0xc002cde960) Stream removed, broadcasting: 1 I0622 13:14:54.192347 7 log.go:172] (0xc001068c60) (0xc002cdea00) Stream removed, broadcasting: 3 I0622 
13:14:54.192358 7 log.go:172] (0xc001068c60) (0xc001dea820) Stream removed, broadcasting: 5 Jun 22 13:14:54.192: INFO: Exec stderr: "" Jun 22 13:14:54.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:54.192: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:54.228284 7 log.go:172] (0xc002e8e0b0) (0xc001deabe0) Create stream I0622 13:14:54.228308 7 log.go:172] (0xc002e8e0b0) (0xc001deabe0) Stream added, broadcasting: 1 I0622 13:14:54.230997 7 log.go:172] (0xc002e8e0b0) Reply frame received for 1 I0622 13:14:54.231047 7 log.go:172] (0xc002e8e0b0) (0xc0012a7c20) Create stream I0622 13:14:54.231059 7 log.go:172] (0xc002e8e0b0) (0xc0012a7c20) Stream added, broadcasting: 3 I0622 13:14:54.231882 7 log.go:172] (0xc002e8e0b0) Reply frame received for 3 I0622 13:14:54.231912 7 log.go:172] (0xc002e8e0b0) (0xc0012a7e00) Create stream I0622 13:14:54.231927 7 log.go:172] (0xc002e8e0b0) (0xc0012a7e00) Stream added, broadcasting: 5 I0622 13:14:54.232603 7 log.go:172] (0xc002e8e0b0) Reply frame received for 5 I0622 13:14:54.301979 7 log.go:172] (0xc002e8e0b0) Data frame received for 5 I0622 13:14:54.302007 7 log.go:172] (0xc0012a7e00) (5) Data frame handling I0622 13:14:54.302034 7 log.go:172] (0xc002e8e0b0) Data frame received for 3 I0622 13:14:54.302042 7 log.go:172] (0xc0012a7c20) (3) Data frame handling I0622 13:14:54.302052 7 log.go:172] (0xc0012a7c20) (3) Data frame sent I0622 13:14:54.302059 7 log.go:172] (0xc002e8e0b0) Data frame received for 3 I0622 13:14:54.302068 7 log.go:172] (0xc0012a7c20) (3) Data frame handling I0622 13:14:54.303304 7 log.go:172] (0xc002e8e0b0) Data frame received for 1 I0622 13:14:54.303321 7 log.go:172] (0xc001deabe0) (1) Data frame handling I0622 13:14:54.303331 7 log.go:172] (0xc001deabe0) (1) Data frame sent I0622 13:14:54.303350 7 log.go:172] (0xc002e8e0b0) 
(0xc001deabe0) Stream removed, broadcasting: 1 I0622 13:14:54.303362 7 log.go:172] (0xc002e8e0b0) Go away received I0622 13:14:54.303436 7 log.go:172] (0xc002e8e0b0) (0xc001deabe0) Stream removed, broadcasting: 1 I0622 13:14:54.303456 7 log.go:172] (0xc002e8e0b0) (0xc0012a7c20) Stream removed, broadcasting: 3 I0622 13:14:54.303462 7 log.go:172] (0xc002e8e0b0) (0xc0012a7e00) Stream removed, broadcasting: 5 Jun 22 13:14:54.303: INFO: Exec stderr: "" Jun 22 13:14:54.303: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:54.303: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:54.327745 7 log.go:172] (0xc002da2210) (0xc000e9ad20) Create stream I0622 13:14:54.327770 7 log.go:172] (0xc002da2210) (0xc000e9ad20) Stream added, broadcasting: 1 I0622 13:14:54.330249 7 log.go:172] (0xc002da2210) Reply frame received for 1 I0622 13:14:54.330287 7 log.go:172] (0xc002da2210) (0xc000e9ae60) Create stream I0622 13:14:54.330300 7 log.go:172] (0xc002da2210) (0xc000e9ae60) Stream added, broadcasting: 3 I0622 13:14:54.331220 7 log.go:172] (0xc002da2210) Reply frame received for 3 I0622 13:14:54.331257 7 log.go:172] (0xc002da2210) (0xc001dead20) Create stream I0622 13:14:54.331266 7 log.go:172] (0xc002da2210) (0xc001dead20) Stream added, broadcasting: 5 I0622 13:14:54.331920 7 log.go:172] (0xc002da2210) Reply frame received for 5 I0622 13:14:54.373686 7 log.go:172] (0xc002da2210) Data frame received for 3 I0622 13:14:54.373711 7 log.go:172] (0xc000e9ae60) (3) Data frame handling I0622 13:14:54.373719 7 log.go:172] (0xc000e9ae60) (3) Data frame sent I0622 13:14:54.373724 7 log.go:172] (0xc002da2210) Data frame received for 3 I0622 13:14:54.373728 7 log.go:172] (0xc000e9ae60) (3) Data frame handling I0622 13:14:54.373783 7 log.go:172] (0xc002da2210) Data frame received for 5 I0622 13:14:54.373797 7 log.go:172] 
(0xc001dead20) (5) Data frame handling I0622 13:14:54.375007 7 log.go:172] (0xc002da2210) Data frame received for 1 I0622 13:14:54.375035 7 log.go:172] (0xc000e9ad20) (1) Data frame handling I0622 13:14:54.375048 7 log.go:172] (0xc000e9ad20) (1) Data frame sent I0622 13:14:54.375062 7 log.go:172] (0xc002da2210) (0xc000e9ad20) Stream removed, broadcasting: 1 I0622 13:14:54.375082 7 log.go:172] (0xc002da2210) Go away received I0622 13:14:54.375261 7 log.go:172] (0xc002da2210) (0xc000e9ad20) Stream removed, broadcasting: 1 I0622 13:14:54.375280 7 log.go:172] (0xc002da2210) (0xc000e9ae60) Stream removed, broadcasting: 3 I0622 13:14:54.375291 7 log.go:172] (0xc002da2210) (0xc001dead20) Stream removed, broadcasting: 5 Jun 22 13:14:54.375: INFO: Exec stderr: "" Jun 22 13:14:54.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:14:54.375: INFO: >>> kubeConfig: /root/.kube/config I0622 13:14:54.406733 7 log.go:172] (0xc001069b80) (0xc002cdee60) Create stream I0622 13:14:54.406752 7 log.go:172] (0xc001069b80) (0xc002cdee60) Stream added, broadcasting: 1 I0622 13:14:54.410003 7 log.go:172] (0xc001069b80) Reply frame received for 1 I0622 13:14:54.410042 7 log.go:172] (0xc001069b80) (0xc002cdef00) Create stream I0622 13:14:54.410064 7 log.go:172] (0xc001069b80) (0xc002cdef00) Stream added, broadcasting: 3 I0622 13:14:54.411162 7 log.go:172] (0xc001069b80) Reply frame received for 3 I0622 13:14:54.411219 7 log.go:172] (0xc001069b80) (0xc002cdefa0) Create stream I0622 13:14:54.411237 7 log.go:172] (0xc001069b80) (0xc002cdefa0) Stream added, broadcasting: 5 I0622 13:14:54.412369 7 log.go:172] (0xc001069b80) Reply frame received for 5 I0622 13:14:54.477243 7 log.go:172] (0xc001069b80) Data frame received for 5 I0622 13:14:54.477270 7 log.go:172] (0xc002cdefa0) (5) Data frame handling I0622 
13:14:54.477294 7 log.go:172] (0xc001069b80) Data frame received for 3 I0622 13:14:54.477302 7 log.go:172] (0xc002cdef00) (3) Data frame handling I0622 13:14:54.477313 7 log.go:172] (0xc002cdef00) (3) Data frame sent I0622 13:14:54.477321 7 log.go:172] (0xc001069b80) Data frame received for 3 I0622 13:14:54.477328 7 log.go:172] (0xc002cdef00) (3) Data frame handling I0622 13:14:54.478960 7 log.go:172] (0xc001069b80) Data frame received for 1 I0622 13:14:54.479001 7 log.go:172] (0xc002cdee60) (1) Data frame handling I0622 13:14:54.479018 7 log.go:172] (0xc002cdee60) (1) Data frame sent I0622 13:14:54.479040 7 log.go:172] (0xc001069b80) (0xc002cdee60) Stream removed, broadcasting: 1 I0622 13:14:54.479057 7 log.go:172] (0xc001069b80) Go away received I0622 13:14:54.479159 7 log.go:172] (0xc001069b80) (0xc002cdee60) Stream removed, broadcasting: 1 I0622 13:14:54.479181 7 log.go:172] (0xc001069b80) (0xc002cdef00) Stream removed, broadcasting: 3 I0622 13:14:54.479196 7 log.go:172] (0xc001069b80) (0xc002cdefa0) Stream removed, broadcasting: 5 Jun 22 13:14:54.479: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:14:54.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2219" for this suite. 
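The exec checks above all reduce to one question: does a container's `/etc/hosts` carry the marker the kubelet writes when it manages the file? A minimal sketch of that predicate, assuming a header comment of the form the kubelet uses (the exact constant name and wording here are illustrative, not the e2e framework's code):

```go
package main

import (
	"fmt"
	"strings"
)

// managedHostsHeader approximates the comment line the kubelet prepends to
// the /etc/hosts file it manages inside a pod (assumed wording).
const managedHostsHeader = "# Kubernetes-managed hosts file"

// isKubeletManaged reports whether the given /etc/hosts content starts with
// the kubelet's marker -- the property the test verifies is present for
// normal mounts and absent for explicit /etc/hosts mounts and hostNetwork pods.
func isKubeletManaged(hosts string) bool {
	return strings.HasPrefix(hosts, managedHostsHeader)
}

func main() {
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	original := "127.0.0.1\tlocalhost\n"
	fmt.Println(isKubeletManaged(managed))  // true
	fmt.Println(isKubeletManaged(original)) // false
}
```

This is why the test cats both `/etc/hosts` and `/etc/hosts-original` per container: the first should carry the marker only when the kubelet manages it, the second never should.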
Jun 22 13:15:46.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:15:46.574: INFO: namespace e2e-kubelet-etc-hosts-2219 deletion completed in 52.090792988s • [SLOW TEST:63.218 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:15:46.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4434.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4434.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4434.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4434.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 13:15:54.723: INFO: DNS probes using dns-4434/dns-test-662230b9-a981-4be8-8079-eb390e1985c2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:15:54.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4434" for this suite. 
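The `awk -F. '{print $1"-"$2"-"$3"-"$4...}'` pipeline in the probe scripts above encodes the pod A-record convention: a pod IP maps to a dashed name under `<namespace>.pod.cluster.local`. A small sketch of that mapping (names here are illustrative, not the e2e framework's helpers):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk pipeline in the DNS probe script: dots in the
// pod's IPv4 address become dashes, suffixed with <namespace>.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	// e.g. the pod IP seen earlier in this log
	fmt.Println(podARecord("10.244.2.134", "dns-4434"))
	// 10-244-2-134.dns-4434.pod.cluster.local
}
```

The probe then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup, which is what the framework polls for.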
Jun 22 13:16:00.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:16:00.866: INFO: namespace dns-4434 deletion completed in 6.107203391s • [SLOW TEST:14.292 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:16:00.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 13:16:00.905: INFO: Waiting up to 5m0s for pod "pod-78114d14-6772-470f-87b1-d306c9a4964d" in namespace "emptydir-1515" to be "success or failure" Jun 22 13:16:00.939: INFO: Pod "pod-78114d14-6772-470f-87b1-d306c9a4964d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.447823ms Jun 22 13:16:02.943: INFO: Pod "pod-78114d14-6772-470f-87b1-d306c9a4964d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038563195s Jun 22 13:16:04.947: INFO: Pod "pod-78114d14-6772-470f-87b1-d306c9a4964d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042477295s STEP: Saw pod success Jun 22 13:16:04.947: INFO: Pod "pod-78114d14-6772-470f-87b1-d306c9a4964d" satisfied condition "success or failure" Jun 22 13:16:04.950: INFO: Trying to get logs from node iruya-worker2 pod pod-78114d14-6772-470f-87b1-d306c9a4964d container test-container: STEP: delete the pod Jun 22 13:16:04.976: INFO: Waiting for pod pod-78114d14-6772-470f-87b1-d306c9a4964d to disappear Jun 22 13:16:04.980: INFO: Pod pod-78114d14-6772-470f-87b1-d306c9a4964d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:16:04.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1515" for this suite. Jun 22 13:16:10.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:16:11.076: INFO: namespace emptydir-1515 deletion completed in 6.092889461s • [SLOW TEST:10.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:16:11.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:16:11.172: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 22 13:16:16.178: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 13:16:16.178: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 22 13:16:16.215: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8174,SelfLink:/apis/apps/v1/namespaces/deployment-8174/deployments/test-cleanup-deployment,UID:a6599d71-5fd1-420e-8a8b-c798f10bb07b,ResourceVersion:17855478,Generation:1,CreationTimestamp:2020-06-22 13:16:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 22 13:16:16.294: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8174,SelfLink:/apis/apps/v1/namespaces/deployment-8174/replicasets/test-cleanup-deployment-55bbcbc84c,UID:a63d3816-e19c-4860-bf21-dace73a57c5b,ResourceVersion:17855480,Generation:1,CreationTimestamp:2020-06-22 13:16:16 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a6599d71-5fd1-420e-8a8b-c798f10bb07b 0xc002fb88d7 0xc002fb88d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 13:16:16.294: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 22 13:16:16.295: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8174,SelfLink:/apis/apps/v1/namespaces/deployment-8174/replicasets/test-cleanup-controller,UID:a6f19558-d928-49f9-aec6-b4c2ecc325bd,ResourceVersion:17855479,Generation:1,CreationTimestamp:2020-06-22 13:16:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a6599d71-5fd1-420e-8a8b-c798f10bb07b 0xc002fb8807 0xc002fb8808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 13:16:16.328: INFO: Pod "test-cleanup-controller-qcd7r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qcd7r,GenerateName:test-cleanup-controller-,Namespace:deployment-8174,SelfLink:/api/v1/namespaces/deployment-8174/pods/test-cleanup-controller-qcd7r,UID:32d138d5-2b0a-4508-82c7-c7e6bc943358,ResourceVersion:17855473,Generation:0,CreationTimestamp:2020-06-22 13:16:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a6f19558-d928-49f9-aec6-b4c2ecc325bd 0xc0025ef677 0xc0025ef678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bmp4t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmp4t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bmp4t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ef6f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ef710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:16:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:16:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:16:14 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:16:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.134,StartTime:2020-06-22 13:16:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 13:16:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://01de33110fee0ab2f30df1057873c8cfe854329d663792a7a50eadf6808a436e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 13:16:16.328: INFO: Pod "test-cleanup-deployment-55bbcbc84c-rr8ss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-rr8ss,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8174,SelfLink:/api/v1/namespaces/deployment-8174/pods/test-cleanup-deployment-55bbcbc84c-rr8ss,UID:2db676ed-c94d-43f7-aed5-4c3e6bb9db1f,ResourceVersion:17855484,Generation:0,CreationTimestamp:2020-06-22 13:16:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c a63d3816-e19c-4860-bf21-dace73a57c5b 0xc0025ef8e7 0xc0025ef8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bmp4t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bmp4t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bmp4t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025efa10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025efa40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:16:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:16:16.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8174" for this suite. 
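The deployment-cleanup test above ("deployment should delete old replica sets") builds its objects in Go inside the e2e framework; as a rough YAML sketch of what it exercises, reconstructed from the labels and images in the logged struct dumps (the `revisionHistoryLimit` value is an assumption — it is the field that controls old-ReplicaSet cleanup, not something printed in this log):

```yaml
# Illustrative sketch only; the real test constructs this via the Go client.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0   # assumed: old ReplicaSets are garbage-collected after rollout
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With a low `revisionHistoryLimit`, the Deployment controller deletes superseded ReplicaSets once a rollout completes, which is the behavior the test verifies.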
Jun 22 13:16:22.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:16:22.450: INFO: namespace deployment-8174 deletion completed in 6.096021795s • [SLOW TEST:11.374 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:16:22.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 22 13:16:22.553: INFO: Waiting up to 5m0s for pod "downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6" in namespace "downward-api-2228" to be "success or failure" Jun 22 13:16:22.556: INFO: Pod "downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.806363ms Jun 22 13:16:24.560: INFO: Pod "downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007352808s Jun 22 13:16:26.564: INFO: Pod "downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011030035s STEP: Saw pod success Jun 22 13:16:26.564: INFO: Pod "downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6" satisfied condition "success or failure" Jun 22 13:16:26.567: INFO: Trying to get logs from node iruya-worker pod downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6 container dapi-container: STEP: delete the pod Jun 22 13:16:26.581: INFO: Waiting for pod downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6 to disappear Jun 22 13:16:26.586: INFO: Pod downward-api-3a7c27fc-256b-4fb0-8d39-4a56267a25b6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:16:26.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2228" for this suite. Jun 22 13:16:32.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:16:32.669: INFO: namespace downward-api-2228 deletion completed in 6.080800524s • [SLOW TEST:10.219 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:16:32.670: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:16:32.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b" in namespace "projected-4408" to be "success or failure" Jun 22 13:16:32.766: INFO: Pod "downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232408ms Jun 22 13:16:34.770: INFO: Pod "downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007469755s Jun 22 13:16:36.774: INFO: Pod "downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011314468s STEP: Saw pod success Jun 22 13:16:36.774: INFO: Pod "downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b" satisfied condition "success or failure" Jun 22 13:16:36.776: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b container client-container: STEP: delete the pod Jun 22 13:16:36.906: INFO: Waiting for pod downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b to disappear Jun 22 13:16:36.922: INFO: Pod downwardapi-volume-e56721b6-1dec-4aef-bec8-693c5eff4f7b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:16:36.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4408" for this suite. Jun 22 13:16:42.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:16:43.024: INFO: namespace projected-4408 deletion completed in 6.097937459s • [SLOW TEST:10.354 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:16:43.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-57af98d7-bcbd-4468-9f92-613d889481fc in namespace container-probe-6894 Jun 22 13:16:47.160: INFO: Started pod busybox-57af98d7-bcbd-4468-9f92-613d889481fc in namespace container-probe-6894 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 13:16:47.164: INFO: Initial restart count of pod busybox-57af98d7-bcbd-4468-9f92-613d889481fc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:20:47.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6894" for this suite. 
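The probe test above watches `restartCount` stay at 0 for four minutes. A minimal sketch of a pod with the exec liveness probe it describes — the image, args, and timing values here are assumptions for illustration, not taken from the log:

```yaml
# Illustrative sketch, assuming a busybox image that creates /tmp/health and keeps it in place,
# so the exec probe keeps succeeding and the container is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because `cat /tmp/health` keeps exiting 0, the kubelet never kills the container, matching the test's expectation that `restartCount` remains 0.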
Jun 22 13:20:53.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:20:53.910: INFO: namespace container-probe-6894 deletion completed in 6.102340476s • [SLOW TEST:250.886 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:20:53.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-dcb6c741-fd28-4dd7-a072-f69522d3b8b8 STEP: Creating a pod to test consume secrets Jun 22 13:20:54.004: INFO: Waiting up to 5m0s for pod "pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877" in namespace "secrets-4478" to be "success or failure" Jun 22 13:20:54.010: INFO: Pod "pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877": Phase="Pending", Reason="", readiness=false. Elapsed: 5.618075ms Jun 22 13:20:56.014: INFO: Pod "pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009787913s Jun 22 13:20:58.018: INFO: Pod "pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013821079s STEP: Saw pod success Jun 22 13:20:58.018: INFO: Pod "pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877" satisfied condition "success or failure" Jun 22 13:20:58.020: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877 container secret-volume-test: STEP: delete the pod Jun 22 13:20:58.041: INFO: Waiting for pod pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877 to disappear Jun 22 13:20:58.045: INFO: Pod pod-secrets-c4d4b7e1-296a-4ee6-a81c-fbb3662be877 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:20:58.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4478" for this suite. Jun 22 13:21:04.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:21:04.145: INFO: namespace secrets-4478 deletion completed in 6.097572271s • [SLOW TEST:10.235 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:21:04.146: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jun 22 13:21:04.209: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 22 13:21:04.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:04.514: INFO: stderr: "" Jun 22 13:21:04.514: INFO: stdout: "service/redis-slave created\n" Jun 22 13:21:04.514: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 22 13:21:04.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:04.800: INFO: stderr: "" Jun 22 13:21:04.800: INFO: stdout: "service/redis-master created\n" Jun 22 13:21:04.801: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 22 13:21:04.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:05.176: INFO: stderr: "" Jun 22 13:21:05.176: INFO: stdout: "service/frontend created\n" Jun 22 13:21:05.176: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 22 13:21:05.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:05.444: INFO: stderr: "" Jun 22 13:21:05.444: INFO: stdout: "deployment.apps/frontend created\n" Jun 22 13:21:05.444: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 22 13:21:05.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:05.736: INFO: stderr: "" Jun 22 13:21:05.736: INFO: stdout: "deployment.apps/redis-master created\n" Jun 22 13:21:05.736: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: 
metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 22 13:21:05.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6421' Jun 22 13:21:06.066: INFO: stderr: "" Jun 22 13:21:06.066: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jun 22 13:21:06.066: INFO: Waiting for all frontend pods to be Running. Jun 22 13:21:16.117: INFO: Waiting for frontend to serve content. Jun 22 13:21:16.182: INFO: Trying to add a new entry to the guestbook. Jun 22 13:21:16.198: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 22 13:21:16.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:16.376: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:16.376: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 22 13:21:16.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:16.531: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:16.531: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 13:21:16.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:16.724: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:16.724: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 13:21:16.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:16.829: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:16.829: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 22 13:21:16.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:16.952: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:16.952: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 22 13:21:16.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6421' Jun 22 13:21:17.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:21:17.103: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:21:17.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6421" for this suite. Jun 22 13:21:55.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:21:55.278: INFO: namespace kubectl-6421 deletion completed in 38.144758329s • [SLOW TEST:51.132 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:21:55.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 13:21:55.405: INFO: Waiting up to 5m0s for pod 
"pod-d71944de-e024-49f2-ba92-6533c75f82ed" in namespace "emptydir-6852" to be "success or failure" Jun 22 13:21:55.415: INFO: Pod "pod-d71944de-e024-49f2-ba92-6533c75f82ed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.373647ms Jun 22 13:21:57.418: INFO: Pod "pod-d71944de-e024-49f2-ba92-6533c75f82ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012388184s Jun 22 13:21:59.423: INFO: Pod "pod-d71944de-e024-49f2-ba92-6533c75f82ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01754616s STEP: Saw pod success Jun 22 13:21:59.423: INFO: Pod "pod-d71944de-e024-49f2-ba92-6533c75f82ed" satisfied condition "success or failure" Jun 22 13:21:59.426: INFO: Trying to get logs from node iruya-worker pod pod-d71944de-e024-49f2-ba92-6533c75f82ed container test-container: STEP: delete the pod Jun 22 13:21:59.462: INFO: Waiting for pod pod-d71944de-e024-49f2-ba92-6533c75f82ed to disappear Jun 22 13:21:59.468: INFO: Pod pod-d71944de-e024-49f2-ba92-6533c75f82ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:21:59.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6852" for this suite. 
Jun 22 13:22:05.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:22:05.580: INFO: namespace emptydir-6852 deletion completed in 6.10915467s • [SLOW TEST:10.302 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:22:05.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 22 13:22:05.666: INFO: Waiting up to 5m0s for pod "var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09" in namespace "var-expansion-7472" to be "success or failure" Jun 22 13:22:05.671: INFO: Pod "var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653163ms Jun 22 13:22:07.675: INFO: Pod "var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009193147s
Jun 22 13:22:09.679: INFO: Pod "var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013217908s
STEP: Saw pod success
Jun 22 13:22:09.679: INFO: Pod "var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09" satisfied condition "success or failure"
Jun 22 13:22:09.682: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09 container dapi-container: 
STEP: delete the pod
Jun 22 13:22:09.743: INFO: Waiting for pod var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09 to disappear
Jun 22 13:22:09.749: INFO: Pod var-expansion-2a98bb34-89d6-4fd5-aef2-535f7eaa0f09 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:22:09.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7472" for this suite.
Jun 22 13:22:15.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:22:15.836: INFO: namespace var-expansion-7472 deletion completed in 6.08408548s
• [SLOW TEST:10.255 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:22:15.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:22:20.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2695" for this suite.
Jun 22 13:22:26.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:22:26.123: INFO: namespace emptydir-wrapper-2695 deletion completed in 6.103003555s
• [SLOW TEST:10.286 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:22:26.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0622 13:23:06.219873 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 22 13:23:06.219: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:23:06.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2761" for this suite.
Jun 22 13:23:14.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:23:14.332: INFO: namespace gc-2761 deletion completed in 8.109423797s
• [SLOW TEST:48.209 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:23:14.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-07a8187e-396b-4ef1-9487-a3a0f90e6485
STEP: Creating configMap with name cm-test-opt-upd-cecdff9e-dfa4-4ba1-86f6-1c6ea1a04dcb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-07a8187e-396b-4ef1-9487-a3a0f90e6485
STEP: Updating configmap cm-test-opt-upd-cecdff9e-dfa4-4ba1-86f6-1c6ea1a04dcb
STEP: Creating configMap with name cm-test-opt-create-69f5def7-cb65-4475-84ee-759708e50361
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:23:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8670" for this suite.
Jun 22 13:23:49.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:23:49.377: INFO: namespace projected-8670 deletion completed in 22.21172313s
• [SLOW TEST:35.044 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:23:49.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a33b169a-7169-44d0-bf1a-d6c3550c3a29
STEP: Creating a pod to test consume secrets
Jun 22 13:23:49.498: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b" in namespace "projected-2961" to be "success or failure"
Jun 22 13:23:49.508: INFO: Pod "pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.97233ms
Jun 22 13:23:51.515: INFO: Pod "pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017326264s
Jun 22 13:23:53.540: INFO: Pod "pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041655872s
STEP: Saw pod success
Jun 22 13:23:53.540: INFO: Pod "pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b" satisfied condition "success or failure"
Jun 22 13:23:53.543: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b container projected-secret-volume-test: 
STEP: delete the pod
Jun 22 13:23:53.575: INFO: Waiting for pod pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b to disappear
Jun 22 13:23:53.589: INFO: Pod pod-projected-secrets-f417e107-9c7d-4a50-ad43-dca9e269d64b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:23:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2961" for this suite.
Jun 22 13:23:59.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:23:59.679: INFO: namespace projected-2961 deletion completed in 6.085755317s
• [SLOW TEST:10.301 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:23:59.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 22 13:23:59.785: INFO: Waiting up to 5m0s for pod "pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb" in namespace "emptydir-2239" to be "success or failure"
Jun 22 13:23:59.788: INFO: Pod "pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.738607ms
Jun 22 13:24:01.845: INFO: Pod "pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.059427257s
Jun 22 13:24:03.849: INFO: Pod "pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063400111s
STEP: Saw pod success
Jun 22 13:24:03.849: INFO: Pod "pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb" satisfied condition "success or failure"
Jun 22 13:24:03.851: INFO: Trying to get logs from node iruya-worker pod pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb container test-container: 
STEP: delete the pod
Jun 22 13:24:03.872: INFO: Waiting for pod pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb to disappear
Jun 22 13:24:03.876: INFO: Pod pod-d5cd4e47-b35d-4732-9f33-43ab681fcdfb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:24:03.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2239" for this suite.
Jun 22 13:24:09.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:24:09.976: INFO: namespace emptydir-2239 deletion completed in 6.096736258s
• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:24:09.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 22 13:24:10.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5169'
Jun 22 13:24:12.724: INFO: stderr: ""
Jun 22 13:24:12.724: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jun 22 13:24:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5169'
Jun 22 13:24:21.877: INFO: stderr: ""
Jun 22 13:24:21.877: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:24:21.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5169" for this suite.
Jun 22 13:24:27.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:24:28.018: INFO: namespace kubectl-5169 deletion completed in 6.124931911s
• [SLOW TEST:18.042 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:24:28.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jun 22 13:24:28.099: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:24:28.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9643" for this suite.
Jun 22 13:24:34.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:24:34.291: INFO: namespace kubectl-9643 deletion completed in 6.085686754s
• [SLOW TEST:6.273 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:24:34.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun 22 13:24:34.354: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jun 22 13:24:35.042: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jun 22 13:24:37.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 22 13:24:39.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429075, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 22 13:24:41.843: INFO: Waited 617.3734ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:24:42.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-497" for this suite.
Jun 22 13:24:48.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:24:48.508: INFO: namespace aggregator-497 deletion completed in 6.206666545s
• [SLOW TEST:14.217 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:24:48.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 22 13:24:56.635: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:24:56.650: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:24:58.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:24:58.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:00.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:00.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:02.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:02.654: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:04.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:04.653: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:06.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:06.653: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:08.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:08.654: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:10.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:10.653: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:12.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:12.654: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:14.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:14.972: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:16.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:16.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:18.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:18.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:20.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:20.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:22.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:22.654: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:24.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:24.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:26.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:26.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:28.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:28.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:30.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:30.654: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 22 13:25:32.650: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 22 13:25:32.654: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:25:32.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4118" for this suite.
Jun 22 13:25:56.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:25:56.773: INFO: namespace container-lifecycle-hook-4118 deletion completed in 24.10874154s
• [SLOW TEST:68.263 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:25:56.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jun 22 13:25:56.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7818'
Jun 22 13:25:57.274: INFO: stderr: ""
Jun 22 13:25:57.274: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jun 22 13:25:58.314: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:25:58.314: INFO: Found 0 / 1
Jun 22 13:25:59.279: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:25:59.279: INFO: Found 0 / 1
Jun 22 13:26:00.292: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:00.293: INFO: Found 0 / 1
Jun 22 13:26:01.279: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:01.279: INFO: Found 0 / 1
Jun 22 13:26:02.317: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:02.317: INFO: Found 0 / 1
Jun 22 13:26:03.279: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:03.279: INFO: Found 0 / 1
Jun 22 13:26:04.278: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:04.278: INFO: Found 1 / 1
Jun 22 13:26:04.279: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 22 13:26:04.281: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 13:26:04.281: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jun 22 13:26:04.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818'
Jun 22 13:26:04.439: INFO: stderr: ""
Jun 22 13:26:04.439: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 13:26:03.178 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 13:26:03.178 # Server started, Redis version 3.2.12\n1:M 22 Jun 13:26:03.178 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 13:26:03.178 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jun 22 13:26:04.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818 --tail=1'
Jun 22 13:26:04.555: INFO: stderr: ""
Jun 22 13:26:04.555: INFO: stdout: "1:M 22 Jun 13:26:03.178 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jun 22 13:26:04.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818 --limit-bytes=1'
Jun 22 13:26:04.655: INFO: stderr: ""
Jun 22 13:26:04.655: INFO: stdout: " "
STEP: exposing timestamps
Jun 22 13:26:04.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818 --tail=1 --timestamps'
Jun 22 13:26:04.750: INFO: stderr: ""
Jun 22 13:26:04.750: INFO: stdout: "2020-06-22T13:26:03.178451125Z 1:M 22 Jun 13:26:03.178 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jun 22 13:26:07.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818 --since=1s'
Jun 22 13:26:07.352: INFO: stderr: ""
Jun 22 13:26:07.352: INFO: stdout: ""
Jun 22 13:26:07.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ctg9k redis-master --namespace=kubectl-7818 --since=24h'
Jun 22 13:26:07.450: INFO: stderr: ""
Jun 22 13:26:07.450: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 13:26:03.178 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 13:26:03.178 # Server started, Redis version 3.2.12\n1:M 22 Jun 13:26:03.178 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 13:26:03.178 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jun 22 13:26:07.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7818'
Jun 22 13:26:07.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 22 13:26:07.578: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jun 22 13:26:07.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7818'
Jun 22 13:26:07.725: INFO: stderr: "No resources found.\n"
Jun 22 13:26:07.725: INFO: stdout: ""
Jun 22 13:26:07.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7818 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 22 13:26:07.870: INFO: stderr: ""
Jun 22 13:26:07.870: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:26:07.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7818" for this suite.
Jun 22 13:26:13.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:26:13.967: INFO: namespace kubectl-7818 deletion completed in 6.093892528s
• [SLOW TEST:17.194 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:26:13.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1590
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1590
STEP: Creating statefulset with conflicting port in namespace statefulset-1590
STEP: Waiting until pod test-pod will start running in namespace statefulset-1590
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1590
Jun 22 13:26:21.380: INFO: Observed stateful pod in namespace: statefulset-1590, name: ss-0, uid: 10d7d326-28be-468b-84a4-9df1f2718017, status phase: Pending. Waiting for statefulset controller to delete.
Jun 22 13:26:22.163: INFO: Observed stateful pod in namespace: statefulset-1590, name: ss-0, uid: 10d7d326-28be-468b-84a4-9df1f2718017, status phase: Failed. Waiting for statefulset controller to delete.
Jun 22 13:26:22.279: INFO: Observed stateful pod in namespace: statefulset-1590, name: ss-0, uid: 10d7d326-28be-468b-84a4-9df1f2718017, status phase: Failed. Waiting for statefulset controller to delete.
Jun 22 13:26:22.296: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1590
STEP: Removing pod with conflicting port in namespace statefulset-1590
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1590 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 22 13:26:38.655: INFO: Deleting all statefulset in ns statefulset-1590
Jun 22 13:26:38.659: INFO: Scaling statefulset ss to 0
Jun 22 13:26:48.692: INFO: Waiting for statefulset status.replicas updated to 0
Jun 22 13:26:48.695: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:26:48.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1590" for this suite.
Jun 22 13:26:56.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:26:56.939: INFO: namespace statefulset-1590 deletion completed in 8.1838456s
• [SLOW TEST:42.971 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:26:56.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-58380e05-78de-4ca7-9f17-0bcc61df978f
STEP: Creating secret with name s-test-opt-upd-d476a4b5-4760-47f7-b109-f64db37b0602
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-58380e05-78de-4ca7-9f17-0bcc61df978f
STEP: Updating secret s-test-opt-upd-d476a4b5-4760-47f7-b109-f64db37b0602
STEP: Creating secret with name s-test-opt-create-8b715d9d-04b1-49b1-b2db-ac428507d9dc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:28:20.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1143" for this suite.
Jun 22 13:28:44.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:28:44.450: INFO: namespace projected-1143 deletion completed in 24.172971689s
• [SLOW TEST:107.510 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:28:44.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:28:52.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2257" for this suite.
Jun 22 13:29:42.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:29:42.945: INFO: namespace kubelet-test-2257 deletion completed in 50.205002668s
• [SLOW TEST:58.495 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:29:42.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-550445ad-c976-4e65-af80-69695995612f
STEP: Creating a pod to test consume configMaps
Jun 22 13:29:43.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e" in namespace "projected-2595" to be "success or failure"
Jun 22 13:29:43.211: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.710687ms
Jun 22 13:29:45.215: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050818681s
Jun 22 13:29:47.219: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055259566s
Jun 22 13:29:49.240: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e": Phase="Running", Reason="", readiness=true. Elapsed: 6.07545046s
Jun 22 13:29:51.244: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079492949s
STEP: Saw pod success
Jun 22 13:29:51.244: INFO: Pod "pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e" satisfied condition "success or failure"
Jun 22 13:29:51.246: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e container projected-configmap-volume-test:
STEP: delete the pod
Jun 22 13:29:51.361: INFO: Waiting for pod pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e to disappear
Jun 22 13:29:51.380: INFO: Pod pod-projected-configmaps-a3df9cdb-7df3-4286-9691-8eb58f46828e no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:29:51.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2595" for this suite.
Jun 22 13:29:57.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:29:57.516: INFO: namespace projected-2595 deletion completed in 6.133361077s
• [SLOW TEST:14.570 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:29:57.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jun 22 13:29:57.657: INFO: Waiting up to 5m0s for pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa" in namespace "containers-8204" to be "success or failure"
Jun 22 13:29:57.685: INFO: Pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa": Phase="Pending", Reason="", readiness=false. Elapsed: 28.165298ms
Jun 22 13:29:59.993: INFO: Pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335986707s
Jun 22 13:30:02.180: INFO: Pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52320943s
Jun 22 13:30:04.184: INFO: Pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.526843625s
STEP: Saw pod success
Jun 22 13:30:04.184: INFO: Pod "client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa" satisfied condition "success or failure"
Jun 22 13:30:04.186: INFO: Trying to get logs from node iruya-worker pod client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa container test-container:
STEP: delete the pod
Jun 22 13:30:04.232: INFO: Waiting for pod client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa to disappear
Jun 22 13:30:04.242: INFO: Pod client-containers-7c46da84-0271-4f08-bde2-ee672d2949fa no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:30:04.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8204" for this suite.
Jun 22 13:30:10.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:30:10.393: INFO: namespace containers-8204 deletion completed in 6.147529311s
• [SLOW TEST:12.877 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:30:10.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3910
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3910
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3910
Jun 22 13:30:10.720: INFO: Found 0 stateful pods, waiting for 1
Jun 22 13:30:20.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 22 13:30:20.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 22 13:30:21.042: INFO: stderr: "I0622 13:30:20.874963 774 log.go:172] (0xc000a240b0) (0xc0008e06e0) Create stream\nI0622 13:30:20.875018 774 log.go:172] (0xc000a240b0) (0xc0008e06e0) Stream added, broadcasting: 1\nI0622 13:30:20.876855 774 log.go:172] (0xc000a240b0) Reply frame received for 1\nI0622 13:30:20.876884 774 log.go:172] (0xc000a240b0) (0xc0005bc140) Create stream\nI0622 13:30:20.876892 774 log.go:172] (0xc000a240b0) (0xc0005bc140) Stream added, broadcasting: 3\nI0622 13:30:20.877914 774 log.go:172] (0xc000a240b0) Reply frame received for 3\nI0622 13:30:20.877965 774 log.go:172] (0xc000a240b0) (0xc0006b0000) Create stream\nI0622 13:30:20.877979 774 log.go:172] (0xc000a240b0) (0xc0006b0000) Stream added, broadcasting: 5\nI0622 13:30:20.878817 774 log.go:172] (0xc000a240b0) Reply frame received for 5\nI0622 13:30:20.982978 774 log.go:172] (0xc000a240b0) Data frame received for 5\nI0622 13:30:20.983009 774 log.go:172] (0xc0006b0000) (5) Data frame handling\nI0622 13:30:20.983032 774 log.go:172] (0xc0006b0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:30:21.032968 774 log.go:172] (0xc000a240b0) Data frame received for 5\nI0622 13:30:21.033003 774 log.go:172] (0xc0006b0000) (5) Data frame handling\nI0622 13:30:21.033041 774 log.go:172] (0xc000a240b0) Data frame received for 3\nI0622 13:30:21.033053 774 log.go:172] (0xc0005bc140) (3) Data frame handling\nI0622 13:30:21.033064 774 log.go:172] (0xc0005bc140) (3) Data frame sent\nI0622 13:30:21.033078 774 log.go:172] (0xc000a240b0) Data frame received for 3\nI0622 13:30:21.033092 774 log.go:172] (0xc0005bc140) (3) Data frame handling\nI0622 13:30:21.035420 774 log.go:172] (0xc000a240b0) Data frame received for 1\nI0622 13:30:21.035454 774 log.go:172] (0xc0008e06e0) (1) Data frame handling\nI0622 13:30:21.035473 774 log.go:172] (0xc0008e06e0) (1) Data frame sent\nI0622 13:30:21.035501 774 log.go:172] (0xc000a240b0) (0xc0008e06e0) Stream removed, broadcasting: 1\nI0622 13:30:21.035543 774 log.go:172] (0xc000a240b0) Go away received\nI0622 13:30:21.035993 774 log.go:172] (0xc000a240b0) (0xc0008e06e0) Stream removed, broadcasting: 1\nI0622 13:30:21.036028 774 log.go:172] (0xc000a240b0) (0xc0005bc140) Stream removed, broadcasting: 3\nI0622 13:30:21.036045 774 log.go:172] (0xc000a240b0) (0xc0006b0000) Stream removed, broadcasting: 5\n"
Jun 22 13:30:21.043: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 22 13:30:21.043: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 22 13:30:21.046: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 22 13:30:31.051: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 22 13:30:31.051: INFO: Waiting for statefulset status.replicas updated to 0
Jun 22 13:30:31.175: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 22 13:30:31.175: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }]
Jun 22 13:30:31.175: INFO:
Jun 22 13:30:31.175: INFO: StatefulSet ss has not reached scale 3, at 1
Jun 22 13:30:32.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.88470241s
Jun 22 13:30:33.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.88009151s
Jun 22 13:30:34.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.771040602s
Jun 22 13:30:35.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.76640583s
Jun 22 13:30:36.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.761140751s
Jun 22 13:30:37.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.756296117s
Jun 22 13:30:38.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.751497453s
Jun 22 13:30:39.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.747735391s
Jun 22 13:30:40.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 743.325669ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3910
Jun 22 13:30:41.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 22 13:30:41.570: INFO: stderr: "I0622 13:30:41.461064 795 log.go:172] (0xc0009a4630) (0xc000688aa0) Create stream\nI0622 13:30:41.461306 795 log.go:172] (0xc0009a4630) (0xc000688aa0) Stream added, broadcasting: 1\nI0622 13:30:41.464059 795 log.go:172] (0xc0009a4630) Reply frame received for 1\nI0622 13:30:41.464085 795 log.go:172] (0xc0009a4630) (0xc0006881e0) Create stream\nI0622 13:30:41.464092 795 log.go:172] (0xc0009a4630) (0xc0006881e0) Stream added, broadcasting: 3\nI0622 13:30:41.467312 795 log.go:172] (0xc0009a4630) Reply frame received for 3\nI0622 13:30:41.467335 795 log.go:172] (0xc0009a4630) (0xc000688280) Create stream\nI0622 13:30:41.467342 795 log.go:172] (0xc0009a4630) (0xc000688280) Stream added, broadcasting: 5\nI0622 13:30:41.468040 795 log.go:172] (0xc0009a4630) Reply frame received for 5\nI0622 13:30:41.562232 795 log.go:172] (0xc0009a4630) Data frame received for 5\nI0622 13:30:41.562262 795 log.go:172] (0xc000688280) (5) Data frame handling\nI0622 13:30:41.562270 795 log.go:172] (0xc000688280) (5) Data frame sent\nI0622 13:30:41.562275 795 log.go:172] (0xc0009a4630) Data frame received for 5\nI0622 13:30:41.562278 795 log.go:172] (0xc000688280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 13:30:41.562294 795 log.go:172] (0xc0009a4630) Data frame received for 3\nI0622 13:30:41.562298 795 log.go:172] (0xc0006881e0) (3) Data frame handling\nI0622 13:30:41.562303 795 log.go:172] (0xc0006881e0) (3) Data frame sent\nI0622 13:30:41.562307 795 log.go:172] (0xc0009a4630) Data frame received for 3\nI0622 13:30:41.562311 795 log.go:172] (0xc0006881e0) (3) Data frame handling\nI0622 13:30:41.564000 795 log.go:172] (0xc0009a4630) Data frame received for 1\nI0622 13:30:41.564016 795 log.go:172] (0xc000688aa0) (1) Data frame handling\nI0622 13:30:41.564023 795 log.go:172] (0xc000688aa0) (1) Data frame sent\nI0622 13:30:41.564032 795 log.go:172] (0xc0009a4630) (0xc000688aa0) Stream removed, broadcasting: 1\nI0622 13:30:41.564038 795 log.go:172] (0xc0009a4630) Go away received\nI0622 13:30:41.564314 795 log.go:172] (0xc0009a4630) (0xc000688aa0) Stream removed, broadcasting: 1\nI0622 13:30:41.564329 795 log.go:172] (0xc0009a4630) (0xc0006881e0) Stream removed, broadcasting: 3\nI0622 13:30:41.564339 795 log.go:172] (0xc0009a4630) (0xc000688280) Stream removed, broadcasting: 5\n"
Jun 22 13:30:41.570: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 22 13:30:41.570: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 22 13:30:41.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 22 13:30:42.087: INFO: stderr: "I0622 13:30:41.696142 815 log.go:172] (0xc0008c4420) (0xc000896960) Create stream\nI0622 13:30:41.696240 815 log.go:172] (0xc0008c4420) (0xc000896960) Stream added, broadcasting: 1\nI0622 13:30:41.706149 815 log.go:172] (0xc0008c4420) Reply frame received for 1\nI0622 13:30:41.706178 815 log.go:172] (0xc0008c4420) (0xc000896000) Create stream\nI0622 13:30:41.706185 815 log.go:172] (0xc0008c4420) (0xc000896000) Stream added, broadcasting: 3\nI0622 13:30:41.706870 815 log.go:172] (0xc0008c4420) Reply frame received for 3\nI0622 13:30:41.706892 815 log.go:172] (0xc0008c4420) (0xc0007fc320) Create stream\nI0622 13:30:41.706901 815 log.go:172] (0xc0008c4420) (0xc0007fc320) Stream added, broadcasting: 5\nI0622 13:30:41.707534 815 log.go:172] (0xc0008c4420) Reply frame received for 5\nI0622 13:30:42.081479 815 log.go:172] (0xc0008c4420) Data frame received for 3\nI0622 13:30:42.081527 815 log.go:172] (0xc000896000) (3) Data frame handling\nI0622 13:30:42.081572 815 log.go:172] (0xc000896000) (3) Data frame sent\nI0622 13:30:42.082039 815 log.go:172] (0xc0008c4420) Data frame received for 5\nI0622 13:30:42.082142 815 log.go:172] (0xc0007fc320) (5) Data frame handling\nI0622 13:30:42.082157 815 log.go:172] (0xc0007fc320) (5) Data frame sent\nI0622 13:30:42.082167 815 log.go:172] (0xc0008c4420) Data frame received for 5\nI0622 13:30:42.082176 815 log.go:172] (0xc0007fc320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0622 13:30:42.082200 815 log.go:172] (0xc0008c4420) Data frame received for 3\nI0622 13:30:42.082208 815 log.go:172] (0xc000896000) (3) Data frame handling\nI0622 13:30:42.082963 815 log.go:172] (0xc0008c4420) Data frame received for 1\nI0622 13:30:42.082976 815 log.go:172] (0xc000896960) (1) Data frame handling\nI0622 13:30:42.082985 815 log.go:172] (0xc000896960) (1) Data frame sent\nI0622 13:30:42.083132 815 log.go:172] (0xc0008c4420) (0xc000896960) Stream removed, broadcasting: 1\nI0622 13:30:42.083173 815 log.go:172] (0xc0008c4420) Go away received\nI0622 13:30:42.083509 815 log.go:172] (0xc0008c4420) (0xc000896960) Stream removed, broadcasting: 1\nI0622 13:30:42.083531 815 log.go:172] (0xc0008c4420) (0xc000896000) Stream removed, broadcasting: 3\nI0622 13:30:42.083542 815 log.go:172] (0xc0008c4420) (0xc0007fc320) Stream removed, broadcasting: 5\n"
Jun 22 13:30:42.087: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 22 13:30:42.087: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 22 13:30:42.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 22 13:30:42.746: INFO: stderr: "I0622 13:30:42.672293 835 log.go:172] (0xc000964420) (0xc000524820) Create stream\nI0622 13:30:42.672342 835 log.go:172] (0xc000964420) (0xc000524820) Stream added, broadcasting: 1\nI0622 13:30:42.674927 835 log.go:172] (0xc000964420) Reply frame received for 1\nI0622 13:30:42.674952 835 log.go:172] (0xc000964420) (0xc000524000) Create stream\nI0622 13:30:42.674959 835 log.go:172] (0xc000964420) (0xc000524000) Stream added, broadcasting: 3\nI0622 13:30:42.675510 835 log.go:172] (0xc000964420) Reply frame received for 3\nI0622 13:30:42.675541 835 log.go:172] (0xc000964420) (0xc0006b6280) Create stream\nI0622 13:30:42.675552 835 log.go:172] (0xc000964420) (0xc0006b6280) Stream added, broadcasting: 5\nI0622 13:30:42.676178 835 log.go:172] (0xc000964420) Reply frame received for 5\nI0622 13:30:42.737923 835 log.go:172] (0xc000964420) Data frame received for 5\nI0622 13:30:42.737974 835 log.go:172] (0xc0006b6280) (5) Data frame handling\nI0622 13:30:42.737991 835 log.go:172] (0xc0006b6280) (5) Data frame sent\nI0622 13:30:42.738005 835 log.go:172] (0xc000964420) Data frame received for 5\nI0622 13:30:42.738017 835 log.go:172] (0xc0006b6280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0622 13:30:42.738035 835 log.go:172] (0xc000964420) Data frame received for 3\nI0622 13:30:42.738048 835 log.go:172] (0xc000524000) (3) Data frame handling\nI0622 13:30:42.738070 835 log.go:172] (0xc000524000) (3) Data frame sent\nI0622 13:30:42.738083 835 log.go:172] (0xc000964420) Data frame received for 3\nI0622 13:30:42.738093 835 log.go:172] (0xc000524000) (3) Data frame handling\nI0622 13:30:42.739757 835 log.go:172] (0xc000964420) Data frame received for 1\nI0622 13:30:42.739775 835 log.go:172] (0xc000524820) (1) Data frame handling\nI0622 13:30:42.739785 835 log.go:172] (0xc000524820) (1) Data frame sent\nI0622 13:30:42.739803 835 log.go:172] (0xc000964420) (0xc000524820) Stream removed, broadcasting: 1\nI0622 13:30:42.739824 835 log.go:172] (0xc000964420) Go away received\nI0622 13:30:42.740309 835 log.go:172] (0xc000964420) (0xc000524820) Stream removed, broadcasting: 1\nI0622 13:30:42.740329 835 log.go:172] (0xc000964420) (0xc000524000) Stream removed, broadcasting: 3\nI0622 13:30:42.740340 835 log.go:172] (0xc000964420) (0xc0006b6280) Stream removed, broadcasting: 5\n"
Jun 22 13:30:42.746: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 22 13:30:42.746: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 22 13:30:42.750: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 13:30:42.750: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 13:30:42.750: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jun 22 13:30:42.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 22 13:30:42.961: INFO: stderr: "I0622 13:30:42.877833 856 log.go:172] (0xc0009d6420) (0xc0008106e0) Create stream\nI0622 13:30:42.877878 856 log.go:172] (0xc0009d6420) (0xc0008106e0) Stream added, broadcasting: 1\nI0622 13:30:42.879829 856 log.go:172] (0xc0009d6420) Reply frame received for 1\nI0622 13:30:42.879865 856 log.go:172] (0xc0009d6420) (0xc0007140a0) Create stream\nI0622 13:30:42.879879 856 log.go:172] (0xc0009d6420) (0xc0007140a0) Stream added, broadcasting: 3\nI0622 13:30:42.880787 856 log.go:172] (0xc0009d6420) Reply frame received for 3\nI0622 13:30:42.880822 856 log.go:172] (0xc0009d6420) (0xc000a0a000) Create stream\nI0622 13:30:42.880839 856 log.go:172] (0xc0009d6420) (0xc000a0a000) Stream added, broadcasting: 5\nI0622 13:30:42.881907 856 log.go:172] (0xc0009d6420) Reply frame received for 5\nI0622 13:30:42.954077 856 log.go:172] (0xc0009d6420) Data frame received for 5\nI0622 13:30:42.954111 856 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0622 13:30:42.954122 856 log.go:172] (0xc000a0a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:30:42.954140 856 log.go:172] (0xc0009d6420) Data frame received for 3\nI0622 13:30:42.954150 856 log.go:172] (0xc0007140a0) (3) Data frame handling\nI0622 13:30:42.954159 856 log.go:172] (0xc0007140a0) (3) Data frame sent\nI0622 13:30:42.954168 856 log.go:172] (0xc0009d6420) Data frame received for 3\nI0622 13:30:42.954175 856 log.go:172] (0xc0007140a0) (3) Data frame handling\nI0622 13:30:42.954242 856 log.go:172] (0xc0009d6420) Data frame received for 5\nI0622 13:30:42.954253 856 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0622 13:30:42.955505 856 log.go:172] (0xc0009d6420) Data frame received for 1\nI0622 13:30:42.955521 856 log.go:172] (0xc0008106e0) (1) Data frame handling\nI0622 13:30:42.955537 856 log.go:172] (0xc0008106e0) (1) Data frame sent\nI0622 13:30:42.955717 856 log.go:172] (0xc0009d6420) (0xc0008106e0) Stream removed, broadcasting: 1\nI0622 13:30:42.955750 856 log.go:172] (0xc0009d6420) Go away received\nI0622 13:30:42.955950 856 log.go:172] (0xc0009d6420) (0xc0008106e0) Stream removed, broadcasting: 1\nI0622 13:30:42.955961 856 log.go:172] (0xc0009d6420) (0xc0007140a0) Stream removed, broadcasting: 3\nI0622 13:30:42.955966 856 log.go:172] (0xc0009d6420) (0xc000a0a000) Stream removed, broadcasting: 5\n"
Jun 22 13:30:42.961: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 22 13:30:42.961: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 22 13:30:42.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 22 13:30:43.179: INFO: stderr: "I0622 13:30:43.078991 876 log.go:172] (0xc000906420) (0xc000a24780) Create stream\nI0622 13:30:43.079051 876 log.go:172] (0xc000906420) (0xc000a24780) Stream added, broadcasting: 1\nI0622 13:30:43.081703 876 log.go:172] (0xc000906420) Reply frame received for 1\nI0622 13:30:43.081752 876 log.go:172] (0xc000906420) (0xc000580320) Create stream\nI0622 13:30:43.081765 876 log.go:172] (0xc000906420) (0xc000580320) Stream added, broadcasting: 3\nI0622 13:30:43.082893 876 log.go:172] (0xc000906420) Reply frame received for 3\nI0622 13:30:43.082944 876 log.go:172] (0xc000906420) (0xc000a24820) Create stream\nI0622 13:30:43.082958 876 log.go:172] (0xc000906420) (0xc000a24820) Stream added, broadcasting: 5\nI0622 13:30:43.083972 876 log.go:172] (0xc000906420) Reply frame received for 5\nI0622 13:30:43.151048 876 log.go:172] (0xc000906420) Data frame received for 5\nI0622 13:30:43.151078 876 log.go:172] (0xc000a24820) (5) Data frame
handling\nI0622 13:30:43.151100 876 log.go:172] (0xc000a24820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:30:43.170623 876 log.go:172] (0xc000906420) Data frame received for 3\nI0622 13:30:43.170651 876 log.go:172] (0xc000580320) (3) Data frame handling\nI0622 13:30:43.170673 876 log.go:172] (0xc000580320) (3) Data frame sent\nI0622 13:30:43.170862 876 log.go:172] (0xc000906420) Data frame received for 5\nI0622 13:30:43.170887 876 log.go:172] (0xc000a24820) (5) Data frame handling\nI0622 13:30:43.171000 876 log.go:172] (0xc000906420) Data frame received for 3\nI0622 13:30:43.171019 876 log.go:172] (0xc000580320) (3) Data frame handling\nI0622 13:30:43.172613 876 log.go:172] (0xc000906420) Data frame received for 1\nI0622 13:30:43.172631 876 log.go:172] (0xc000a24780) (1) Data frame handling\nI0622 13:30:43.172641 876 log.go:172] (0xc000a24780) (1) Data frame sent\nI0622 13:30:43.172656 876 log.go:172] (0xc000906420) (0xc000a24780) Stream removed, broadcasting: 1\nI0622 13:30:43.172759 876 log.go:172] (0xc000906420) Go away received\nI0622 13:30:43.172977 876 log.go:172] (0xc000906420) (0xc000a24780) Stream removed, broadcasting: 1\nI0622 13:30:43.172993 876 log.go:172] (0xc000906420) (0xc000580320) Stream removed, broadcasting: 3\nI0622 13:30:43.173004 876 log.go:172] (0xc000906420) (0xc000a24820) Stream removed, broadcasting: 5\n" Jun 22 13:30:43.180: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:30:43.180: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:30:43.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3910 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 22 13:30:43.496: INFO: stderr: "I0622 13:30:43.334381 897 log.go:172] (0xc00094a370) (0xc0008ca640) Create stream\nI0622 13:30:43.334443 897 log.go:172] 
(0xc00094a370) (0xc0008ca640) Stream added, broadcasting: 1\nI0622 13:30:43.336894 897 log.go:172] (0xc00094a370) Reply frame received for 1\nI0622 13:30:43.336930 897 log.go:172] (0xc00094a370) (0xc00090e000) Create stream\nI0622 13:30:43.336948 897 log.go:172] (0xc00094a370) (0xc00090e000) Stream added, broadcasting: 3\nI0622 13:30:43.338390 897 log.go:172] (0xc00094a370) Reply frame received for 3\nI0622 13:30:43.338434 897 log.go:172] (0xc00094a370) (0xc0008ca6e0) Create stream\nI0622 13:30:43.338452 897 log.go:172] (0xc00094a370) (0xc0008ca6e0) Stream added, broadcasting: 5\nI0622 13:30:43.339513 897 log.go:172] (0xc00094a370) Reply frame received for 5\nI0622 13:30:43.400421 897 log.go:172] (0xc00094a370) Data frame received for 5\nI0622 13:30:43.400450 897 log.go:172] (0xc0008ca6e0) (5) Data frame handling\nI0622 13:30:43.400466 897 log.go:172] (0xc0008ca6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 13:30:43.486240 897 log.go:172] (0xc00094a370) Data frame received for 3\nI0622 13:30:43.486310 897 log.go:172] (0xc00090e000) (3) Data frame handling\nI0622 13:30:43.486354 897 log.go:172] (0xc00090e000) (3) Data frame sent\nI0622 13:30:43.486590 897 log.go:172] (0xc00094a370) Data frame received for 5\nI0622 13:30:43.486611 897 log.go:172] (0xc0008ca6e0) (5) Data frame handling\nI0622 13:30:43.486749 897 log.go:172] (0xc00094a370) Data frame received for 3\nI0622 13:30:43.486798 897 log.go:172] (0xc00090e000) (3) Data frame handling\nI0622 13:30:43.488570 897 log.go:172] (0xc00094a370) Data frame received for 1\nI0622 13:30:43.488587 897 log.go:172] (0xc0008ca640) (1) Data frame handling\nI0622 13:30:43.488595 897 log.go:172] (0xc0008ca640) (1) Data frame sent\nI0622 13:30:43.488606 897 log.go:172] (0xc00094a370) (0xc0008ca640) Stream removed, broadcasting: 1\nI0622 13:30:43.488622 897 log.go:172] (0xc00094a370) Go away received\nI0622 13:30:43.488898 897 log.go:172] (0xc00094a370) (0xc0008ca640) Stream removed, broadcasting: 
1\nI0622 13:30:43.488916 897 log.go:172] (0xc00094a370) (0xc00090e000) Stream removed, broadcasting: 3\nI0622 13:30:43.488929 897 log.go:172] (0xc00094a370) (0xc0008ca6e0) Stream removed, broadcasting: 5\n" Jun 22 13:30:43.496: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 22 13:30:43.496: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 22 13:30:43.496: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:30:43.499: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 22 13:30:53.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:30:53.506: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:30:53.506: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 22 13:30:53.530: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:53.530: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:53.530: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 
13:30:31 +0000 UTC }] Jun 22 13:30:53.530: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:53.530: INFO: Jun 22 13:30:53.530: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 13:30:55.213: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:55.213: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:55.213: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:55.213: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 
13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:55.213: INFO: Jun 22 13:30:55.213: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 13:30:56.236: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:56.236: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:56.236: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:56.236: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:56.236: INFO: Jun 22 13:30:56.236: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 13:30:57.247: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:57.247: INFO: ss-0 
iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:57.247: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:57.247: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:57.247: INFO: Jun 22 13:30:57.247: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 13:30:58.294: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:58.294: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:58.294: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:58.294: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:58.294: INFO: Jun 22 13:30:58.294: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 22 13:30:59.298: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:30:59.298: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:30:59.298: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:30:59.298: INFO: Jun 22 13:30:59.298: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 22 13:31:00.304: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:31:00.304: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:31:00.304: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:31:00.304: INFO: Jun 22 13:31:00.304: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 22 13:31:01.313: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:31:01.313: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:31:01.313: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:31:01.313: INFO: Jun 22 13:31:01.313: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 22 13:31:02.451: INFO: POD NODE PHASE GRACE CONDITIONS Jun 22 13:31:02.451: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:10 +0000 UTC }] Jun 22 13:31:02.451: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:30:31 +0000 UTC }] Jun 22 13:31:02.451: INFO: Jun 22 13:31:02.451: INFO: StatefulSet ss has not reached scale 0, at 2 Jun 22 13:31:03.510: INFO: Verifying statefulset ss doesn't scale past 0 for another 61.990653ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-3910 Jun 22 13:31:04.606: INFO: Scaling statefulset ss to 0 Jun 22 13:31:04.615: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 22 13:31:04.617: INFO: Deleting all statefulset in ns statefulset-3910 Jun 22 13:31:04.620: INFO: Scaling statefulset ss to 0 Jun 22 13:31:04.627: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:31:04.629: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:31:04.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3910" for this suite. Jun 22 13:31:12.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:31:12.952: INFO: namespace statefulset-3910 deletion completed in 8.284274489s • [SLOW TEST:62.559 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:31:12.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 22 13:31:25.307: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 13:31:25.310: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 13:31:27.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 13:31:27.315: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 13:31:29.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 13:31:29.325: INFO: Pod pod-with-prestop-http-hook still exists Jun 22 13:31:31.310: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 22 13:31:31.314: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:31:31.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8785" for this suite. 
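The lifecycle-hook teardown above polls every two seconds ("Waiting for pod pod-with-prestop-http-hook to disappear ... still exists") until the pod object is gone. A minimal sketch of that wait-until-deleted loop, assuming a stand-in `get_pod` lookup rather than the framework's real client:

```python
import time

def wait_for_disappear(get_pod, name, timeout=60.0, interval=2.0):
    """Poll until get_pod(name) returns None (object deleted) or timeout expires.

    Illustrative only: get_pod is a hypothetical stand-in for an API lookup,
    not the e2e framework's actual client call.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod(name) is None:
            return True  # pod no longer exists
        time.sleep(interval)  # pod still exists; retry after the interval
    return False  # timed out while the pod still existed

# Simulated client: the pod "disappears" after two lookups.
calls = {"n": 0}
def fake_get_pod(name):
    calls["n"] += 1
    return None if calls["n"] > 2 else {"name": name}

assert wait_for_disappear(fake_get_pod, "pod-with-prestop-http-hook",
                          timeout=5, interval=0.01)
```

The real framework additionally distinguishes "not found" from transient API errors; this sketch collapses both into the `None` case.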
Jun 22 13:31:53.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:31:53.913: INFO: namespace container-lifecycle-hook-8785 deletion completed in 22.589009863s • [SLOW TEST:40.960 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:31:53.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-22n8 STEP: Creating a pod to test atomic-volume-subpath Jun 22 13:31:54.197: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-22n8" in namespace "subpath-3908" to be "success or failure" Jun 
22 13:31:54.232: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.222808ms Jun 22 13:31:56.236: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039832268s Jun 22 13:31:58.241: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044008664s Jun 22 13:32:00.245: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 6.048686037s Jun 22 13:32:02.250: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 8.053017532s Jun 22 13:32:04.254: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 10.057843545s Jun 22 13:32:06.259: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 12.062348791s Jun 22 13:32:08.263: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 14.066625637s Jun 22 13:32:10.268: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 16.071130675s Jun 22 13:32:12.272: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 18.075034579s Jun 22 13:32:14.279: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 20.08193238s Jun 22 13:32:16.283: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 22.0866627s Jun 22 13:32:18.288: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Running", Reason="", readiness=true. Elapsed: 24.091068278s Jun 22 13:32:20.292: INFO: Pod "pod-subpath-test-configmap-22n8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.095361475s STEP: Saw pod success Jun 22 13:32:20.292: INFO: Pod "pod-subpath-test-configmap-22n8" satisfied condition "success or failure" Jun 22 13:32:20.296: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-22n8 container test-container-subpath-configmap-22n8: STEP: delete the pod Jun 22 13:32:20.336: INFO: Waiting for pod pod-subpath-test-configmap-22n8 to disappear Jun 22 13:32:20.365: INFO: Pod pod-subpath-test-configmap-22n8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-22n8 Jun 22 13:32:20.365: INFO: Deleting pod "pod-subpath-test-configmap-22n8" in namespace "subpath-3908" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:32:20.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3908" for this suite. Jun 22 13:32:26.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:32:26.459: INFO: namespace subpath-3908 deletion completed in 6.088010754s • [SLOW TEST:32.545 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:32:26.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jun 22 13:32:26.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3682' Jun 22 13:32:26.939: INFO: stderr: "" Jun 22 13:32:26.939: INFO: stdout: "pod/pause created\n" Jun 22 13:32:26.939: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 22 13:32:26.939: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3682" to be "running and ready" Jun 22 13:32:27.015: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 75.617092ms Jun 22 13:32:29.020: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080657222s Jun 22 13:32:31.024: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085068687s Jun 22 13:32:33.028: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.08881456s Jun 22 13:32:33.028: INFO: Pod "pause" satisfied condition "running and ready" Jun 22 13:32:33.028: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jun 22 13:32:33.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3682' Jun 22 13:32:33.125: INFO: stderr: "" Jun 22 13:32:33.125: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 22 13:32:33.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3682' Jun 22 13:32:33.217: INFO: stderr: "" Jun 22 13:32:33.217: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 22 13:32:33.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3682' Jun 22 13:32:33.379: INFO: stderr: "" Jun 22 13:32:33.379: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 22 13:32:33.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3682' Jun 22 13:32:33.471: INFO: stderr: "" Jun 22 13:32:33.471: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jun 22 13:32:33.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3682' Jun 22 13:32:33.616: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running 
resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 13:32:33.616: INFO: stdout: "pod \"pause\" force deleted\n" Jun 22 13:32:33.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3682' Jun 22 13:32:33.717: INFO: stderr: "No resources found.\n" Jun 22 13:32:33.717: INFO: stdout: "" Jun 22 13:32:33.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3682 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 13:32:33.813: INFO: stderr: "" Jun 22 13:32:33.814: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:32:33.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3682" for this suite. 
Jun 22 13:32:39.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:32:39.973: INFO: namespace kubectl-3682 deletion completed in 6.15588357s • [SLOW TEST:13.514 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:32:39.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-731.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-731.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-731.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-731.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.210.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.210.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.210.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.210.193_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-731.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-731.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-731.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-731.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-731.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-731.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.210.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.210.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.210.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.210.193_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 13:32:50.586: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.590: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.593: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.595: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.639: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.645: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod 
dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.648: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:50.663: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:32:55.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.694: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.696: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.698: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod 
dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.720: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.724: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.727: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:32:55.750: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:33:00.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the 
server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.672: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.675: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.678: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.697: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.702: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.704: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods 
dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:00.721: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:33:05.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.672: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.675: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.677: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.733: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 
13:33:05.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.739: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.741: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:05.864: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:33:10.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.671: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.675: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.678: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.700: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.705: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:10.726: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local 
jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:33:15.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.672: INFO: Unable to read wheezy_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.675: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.678: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.707: INFO: Unable to read jessie_udp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.712: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find 
the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local from pod dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1: the server could not find the requested resource (get pods dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1) Jun 22 13:33:15.731: INFO: Lookups using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 failed for: [wheezy_udp@dns-test-service.dns-731.svc.cluster.local wheezy_tcp@dns-test-service.dns-731.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_udp@dns-test-service.dns-731.svc.cluster.local jessie_tcp@dns-test-service.dns-731.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-731.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-731.svc.cluster.local] Jun 22 13:33:20.739: INFO: DNS probes using dns-731/dns-test-7722ade0-c584-4c45-9623-30483a5cbfc1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:33:21.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-731" for this suite. 
Jun 22 13:33:27.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:33:28.154: INFO: namespace dns-731 deletion completed in 6.249707671s • [SLOW TEST:48.180 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:33:28.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 22 13:33:28.316: INFO: PodSpec: initContainers in spec.initContainers Jun 22 13:34:24.086: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c169857b-8dc1-4e08-9d6c-df6f7c45b259", GenerateName:"", Namespace:"init-container-8240", SelfLink:"/api/v1/namespaces/init-container-8240/pods/pod-init-c169857b-8dc1-4e08-9d6c-df6f7c45b259", UID:"6c19cef5-2b12-44b2-a286-4e38217ed816", ResourceVersion:"17859046", 
Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728429608, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"316205106"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cswfx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00138a180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cswfx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cswfx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cswfx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002dda088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019b2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dda110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002dda130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002dda138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002dda13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429608, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429608, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429608, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728429608, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.159", StartTime:(*v1.Time)(0xc00165e220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00165e320), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002114070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8e4ad41ddc9e95049a24d4f9924a04f94d38a10d594684f19fa836c69d89c14d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00165e3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00165e260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:34:24.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8240" for this suite. 
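For reference, the InitContainer spec above exercises a pod shaped roughly like the following sketch (reconstructed from the log's container names and images; the framework's actual fixture and commands may differ). With `restartPolicy: Always`, a failing init container is retried indefinitely, so the app container never starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo          # illustrative name, not the test's generated one
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # assumed failing command; keeps init1 restarting
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]      # never runs while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1 # stays Waiting, matching the dumped PodStatus
```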
Jun 22 13:34:46.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:34:46.420: INFO: namespace init-container-8240 deletion completed in 22.241272476s • [SLOW TEST:78.266 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:34:46.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576 Jun 22 13:34:46.540: INFO: Pod name my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576: Found 0 pods out of 1 Jun 22 13:34:51.545: INFO: Pod name my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576: Found 1 pods out of 1 Jun 22 13:34:51.545: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576" are running Jun 22 13:34:53.552: INFO: Pod "my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576-xpnb6" is 
running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:34:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:34:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:34:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-22 13:34:46 +0000 UTC Reason: Message:}]) Jun 22 13:34:53.552: INFO: Trying to dial the pod Jun 22 13:34:58.582: INFO: Controller my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576: Got expected result from replica 1 [my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576-xpnb6]: "my-hostname-basic-d3c23ee6-263b-492f-a597-0eb614097576-xpnb6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:34:58.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3907" for this suite. 
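The ReplicationController test above creates a controller of roughly this shape (a minimal sketch; the suite generates a UUID-suffixed name and uses its own public image that serves the pod's hostname, which the `Got expected result from replica` check then dials):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic       # the test appends a UUID to this
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # hypothetical image reference; the e2e suite pins its own
        # serve-hostname image, which replies with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
```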
Jun 22 13:35:04.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:35:04.685: INFO: namespace replication-controller-3907 deletion completed in 6.10000856s • [SLOW TEST:18.264 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:35:04.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jun 22 13:35:04.836: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix744793266/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:35:04.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6021" for this suite. 
Jun 22 13:35:10.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:35:11.071: INFO: namespace kubectl-6021 deletion completed in 6.119791942s • [SLOW TEST:6.386 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:35:11.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 22 13:35:11.305: INFO: Waiting up to 5m0s for pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790" in namespace "emptydir-9107" to be "success or failure" Jun 22 13:35:11.357: INFO: Pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790": Phase="Pending", Reason="", readiness=false. Elapsed: 51.980177ms Jun 22 13:35:13.361: INFO: Pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056197046s Jun 22 13:35:15.562: INFO: Pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257163154s Jun 22 13:35:17.566: INFO: Pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260532491s STEP: Saw pod success Jun 22 13:35:17.566: INFO: Pod "pod-15977d11-947e-4524-8ab8-fa84ecb11790" satisfied condition "success or failure" Jun 22 13:35:17.568: INFO: Trying to get logs from node iruya-worker pod pod-15977d11-947e-4524-8ab8-fa84ecb11790 container test-container: STEP: delete the pod Jun 22 13:35:17.603: INFO: Waiting for pod pod-15977d11-947e-4524-8ab8-fa84ecb11790 to disappear Jun 22 13:35:17.632: INFO: Pod pod-15977d11-947e-4524-8ab8-fa84ecb11790 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:35:17.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9107" for this suite. 
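The EmptyDir `(non-root,0666,tmpfs)` spec above boils down to a pod like this sketch (names and command are illustrative; the real test runs a mount-test binary that creates a file with the requested mode and verifies its permissions and ownership):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo  # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # "non-root" variant runs as a non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /mnt/volume"]  # stand-in for the mount-test check
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory              # "tmpfs"; omit medium for the node-default variant
```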
Jun 22 13:35:23.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:35:23.811: INFO: namespace emptydir-9107 deletion completed in 6.176311361s • [SLOW TEST:12.739 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:35:23.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 22 13:35:23.965: INFO: Waiting up to 5m0s for pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464" in namespace "containers-2929" to be "success or failure" Jun 22 13:35:23.978: INFO: Pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464": Phase="Pending", Reason="", readiness=false. Elapsed: 13.58924ms Jun 22 13:35:25.983: INFO: Pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017833849s Jun 22 13:35:28.305: INFO: Pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340198297s Jun 22 13:35:30.310: INFO: Pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.344782409s STEP: Saw pod success Jun 22 13:35:30.310: INFO: Pod "client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464" satisfied condition "success or failure" Jun 22 13:35:30.313: INFO: Trying to get logs from node iruya-worker2 pod client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464 container test-container: STEP: delete the pod Jun 22 13:35:30.591: INFO: Waiting for pod client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464 to disappear Jun 22 13:35:30.687: INFO: Pod client-containers-43613773-1db7-46f5-aa93-4c6bfcfb0464 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:35:30.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2929" for this suite. 
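The "override the image's default command and arguments" spec above relies on the standard mapping between Docker and Kubernetes fields: `command` replaces the image's ENTRYPOINT and `args` replaces its CMD. A minimal illustration (values are assumptions, not the test's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]        # overrides the image's ENTRYPOINT
    args: ["overridden", "arguments"]  # overrides the image's CMD
```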
Jun 22 13:35:36.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:35:36.871: INFO: namespace containers-2929 deletion completed in 6.180068105s • [SLOW TEST:13.060 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:35:36.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 13:35:37.146: INFO: Waiting up to 5m0s for pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636" in namespace "emptydir-8622" to be "success or failure" Jun 22 13:35:37.176: INFO: Pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636": Phase="Pending", Reason="", readiness=false. Elapsed: 30.461594ms Jun 22 13:35:39.180: INFO: Pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034139284s Jun 22 13:35:41.184: INFO: Pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038676722s Jun 22 13:35:43.188: INFO: Pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042136531s STEP: Saw pod success Jun 22 13:35:43.188: INFO: Pod "pod-e86b7e16-1488-451e-8f61-d8cca100d636" satisfied condition "success or failure" Jun 22 13:35:43.192: INFO: Trying to get logs from node iruya-worker2 pod pod-e86b7e16-1488-451e-8f61-d8cca100d636 container test-container: STEP: delete the pod Jun 22 13:35:43.543: INFO: Waiting for pod pod-e86b7e16-1488-451e-8f61-d8cca100d636 to disappear Jun 22 13:35:43.652: INFO: Pod pod-e86b7e16-1488-451e-8f61-d8cca100d636 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:35:43.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8622" for this suite. 
Jun 22 13:35:49.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:35:49.851: INFO: namespace emptydir-8622 deletion completed in 6.195780689s • [SLOW TEST:12.979 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:35:49.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:35:50.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076" in namespace "downward-api-2918" to be "success or failure" Jun 22 13:35:50.009: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076": Phase="Pending", 
Reason="", readiness=false. Elapsed: 9.136202ms Jun 22 13:35:52.013: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013025507s Jun 22 13:35:54.066: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065407165s Jun 22 13:35:56.070: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076": Phase="Running", Reason="", readiness=true. Elapsed: 6.070052206s Jun 22 13:35:58.074: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07373778s STEP: Saw pod success Jun 22 13:35:58.074: INFO: Pod "downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076" satisfied condition "success or failure" Jun 22 13:35:58.078: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076 container client-container: STEP: delete the pod Jun 22 13:35:58.157: INFO: Waiting for pod downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076 to disappear Jun 22 13:35:58.237: INFO: Pod downwardapi-volume-90aa41a9-075e-4acc-b01b-0ecbfdbc7076 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:35:58.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2918" for this suite. 
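The Downward API spec above checks the documented fallback: when a container sets no `resources.limits.memory`, a `resourceFieldRef` on `limits.memory` resolves to the node's allocatable memory. Sketched as a manifest (file path and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits.memory here, so the downward API
    # falls back to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```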
Jun 22 13:36:04.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:36:04.374: INFO: namespace downward-api-2918 deletion completed in 6.13313328s • [SLOW TEST:14.521 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:36:04.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 22 13:36:04.563: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 13:36:04.626: INFO: Waiting for terminating namespaces to be deleted... 
Jun 22 13:36:04.628: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 22 13:36:04.633: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.633: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 13:36:04.633: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.633: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 13:36:04.633: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 22 13:36:04.638: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.638: INFO: Container coredns ready: true, restart count 0 Jun 22 13:36:04.638: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.638: INFO: Container coredns ready: true, restart count 0 Jun 22 13:36:04.638: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.638: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 13:36:04.638: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 22 13:36:04.638: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161ae12464bc75ee], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
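The `FailedScheduling` event above ("0/3 nodes are available: 3 node(s) didn't match node selector") is produced by a pod whose `nodeSelector` matches no node, along these lines (selector key/value are assumptions; only the pod name comes from the event):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod            # name taken from the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    label: nonempty               # assumed: no node carries this label,
                                  # so the scheduler reports FailedScheduling
```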
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:36:05.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4010" for this suite. Jun 22 13:36:11.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:36:11.804: INFO: namespace sched-pred-4010 deletion completed in 6.142930086s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.430 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:36:11.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 22 13:36:11.982: INFO: Waiting up to 5m0s for pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9" in namespace 
"emptydir-1865" to be "success or failure" Jun 22 13:36:11.998: INFO: Pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.524256ms Jun 22 13:36:14.001: INFO: Pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018953543s Jun 22 13:36:16.010: INFO: Pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027708814s Jun 22 13:36:18.013: INFO: Pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03141895s STEP: Saw pod success Jun 22 13:36:18.013: INFO: Pod "pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9" satisfied condition "success or failure" Jun 22 13:36:18.066: INFO: Trying to get logs from node iruya-worker2 pod pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9 container test-container: STEP: delete the pod Jun 22 13:36:18.107: INFO: Waiting for pod pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9 to disappear Jun 22 13:36:18.128: INFO: Pod pod-7649f2de-d35e-4c53-8efc-0832ba72e6f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:36:18.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1865" for this suite. 
Jun 22 13:36:24.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:36:24.224: INFO: namespace emptydir-1865 deletion completed in 6.091442449s • [SLOW TEST:12.419 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:36:24.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 22 13:36:24.343: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 13:36:24.350: INFO: Waiting for terminating namespaces to be deleted... 
Jun 22 13:36:24.352: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 22 13:36:24.356: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.356: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 13:36:24.356: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.356: INFO: Container kindnet-cni ready: true, restart count 2 Jun 22 13:36:24.356: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 22 13:36:24.360: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.360: INFO: Container coredns ready: true, restart count 0 Jun 22 13:36:24.360: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.360: INFO: Container coredns ready: true, restart count 0 Jun 22 13:36:24.360: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.360: INFO: Container kube-proxy ready: true, restart count 0 Jun 22 13:36:24.360: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 22 13:36:24.360: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ac085b88-0dfc-47a4-a869-9fafe98c9ab7 42 STEP: Trying to relaunch the pod, now with labels. 
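The matching case above labels a node and relaunches the pod with a `nodeSelector` on that label (the label key and value `42` are taken from the log; the pod name and image are illustrative):

```yaml
# assumed prior step, e.g.:
#   kubectl label node iruya-worker kubernetes.io/e2e-ac085b88-0dfc-47a4-a869-9fafe98c9ab7=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels               # illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    kubernetes.io/e2e-ac085b88-0dfc-47a4-a869-9fafe98c9ab7: "42"
```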
STEP: removing the label kubernetes.io/e2e-ac085b88-0dfc-47a4-a869-9fafe98c9ab7 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ac085b88-0dfc-47a4-a869-9fafe98c9ab7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:36:32.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2960" for this suite. Jun 22 13:37:02.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:37:02.750: INFO: namespace sched-pred-2960 deletion completed in 30.158300586s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:38.526 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:37:02.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] 
[k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 13:37:02.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6362' Jun 22 13:37:06.228: INFO: stderr: "" Jun 22 13:37:06.228: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 22 13:37:16.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6362 -o json' Jun 22 13:37:16.375: INFO: stderr: "" Jun 22 13:37:16.375: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-22T13:37:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-6362\",\n \"resourceVersion\": \"17859612\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6362/pods/e2e-test-nginx-pod\",\n \"uid\": \"69bb382d-5004-4347-94e7-6e0f938af60a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lmwbr\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n 
\"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lmwbr\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lmwbr\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T13:37:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T13:37:12Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T13:37:12Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-22T13:37:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://1cc9d23d292ef4b79c01e2386483226bfb2f3e1b0dd95e3414312ed470413b08\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-22T13:37:11Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.76\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-22T13:37:06Z\"\n }\n}\n" STEP: replace the image in the pod 
Jun 22 13:37:16.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6362' Jun 22 13:37:16.705: INFO: stderr: "" Jun 22 13:37:16.705: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 22 13:37:16.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6362' Jun 22 13:37:22.349: INFO: stderr: "" Jun 22 13:37:22.349: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:37:22.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6362" for this suite. Jun 22 13:37:28.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:37:28.555: INFO: namespace kubectl-6362 deletion completed in 6.20214392s • [SLOW TEST:25.804 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:37:28.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-f4b98d70-6ac0-4846-80db-1a3af4d31f0e STEP: Creating secret with name secret-projected-all-test-volume-96d8c960-e101-40b7-8ed3-9f95ab30df8b STEP: Creating a pod to test Check all projections for projected volume plugin Jun 22 13:37:28.690: INFO: Waiting up to 5m0s for pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44" in namespace "projected-3462" to be "success or failure" Jun 22 13:37:28.726: INFO: Pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44": Phase="Pending", Reason="", readiness=false. Elapsed: 35.909027ms Jun 22 13:37:30.731: INFO: Pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040402234s Jun 22 13:37:32.971: INFO: Pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280869227s Jun 22 13:37:34.975: INFO: Pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.284785457s STEP: Saw pod success Jun 22 13:37:34.975: INFO: Pod "projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44" satisfied condition "success or failure" Jun 22 13:37:34.977: INFO: Trying to get logs from node iruya-worker pod projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44 container projected-all-volume-test: STEP: delete the pod Jun 22 13:37:35.008: INFO: Waiting for pod projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44 to disappear Jun 22 13:37:35.102: INFO: Pod projected-volume-42dda776-cd56-442d-acb4-34d72a3f4f44 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:37:35.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3462" for this suite. Jun 22 13:37:41.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:37:41.230: INFO: namespace projected-3462 deletion completed in 6.123966493s • [SLOW TEST:12.676 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Jun 22 13:37:41.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 13:37:47.575: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:37:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7198" for this suite. 
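[Editor's note] The termination-message test above expects the literal message `OK` (per the "Expected: &{OK}" line) read from the termination-log file of a pod that runs to completion. The fields involved look like this; all names here are illustrative, not the test's generated ones:

```yaml
# Sketch, assuming a busybox image: the container writes "OK" to the
# termination-log file and exits 0. With FallbackToLogsOnError, container
# logs are consulted only when the file is empty AND the container failed,
# so here the message still comes from the file.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```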
Jun 22 13:37:53.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:37:53.713: INFO: namespace container-runtime-7198 deletion completed in 6.096194409s • [SLOW TEST:12.483 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:37:53.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 22 13:38:00.918: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: 
Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:38:01.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2178" for this suite. Jun 22 13:38:38.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:38:38.118: INFO: namespace replicaset-2178 deletion completed in 36.139586725s • [SLOW TEST:44.404 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:38:38.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-ssr6 STEP: Creating a pod to test atomic-volume-subpath Jun 22 13:38:38.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ssr6" in namespace 
"subpath-9233" to be "success or failure" Jun 22 13:38:38.399: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.963706ms Jun 22 13:38:40.498: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130894652s Jun 22 13:38:42.502: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134973348s Jun 22 13:38:44.506: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 6.139088328s Jun 22 13:38:46.510: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 8.142786205s Jun 22 13:38:48.514: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 10.147008286s Jun 22 13:38:50.518: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 12.150826824s Jun 22 13:38:52.522: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 14.155407354s Jun 22 13:38:54.526: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 16.159390176s Jun 22 13:38:56.530: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 18.16271799s Jun 22 13:38:58.534: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 20.167028577s Jun 22 13:39:00.538: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 22.171561047s Jun 22 13:39:02.543: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. Elapsed: 24.175936621s Jun 22 13:39:04.547: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.180129707s Jun 22 13:39:06.553: INFO: Pod "pod-subpath-test-projected-ssr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.186402947s STEP: Saw pod success Jun 22 13:39:06.553: INFO: Pod "pod-subpath-test-projected-ssr6" satisfied condition "success or failure" Jun 22 13:39:06.556: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-ssr6 container test-container-subpath-projected-ssr6: STEP: delete the pod Jun 22 13:39:06.595: INFO: Waiting for pod pod-subpath-test-projected-ssr6 to disappear Jun 22 13:39:06.611: INFO: Pod pod-subpath-test-projected-ssr6 no longer exists STEP: Deleting pod pod-subpath-test-projected-ssr6 Jun 22 13:39:06.611: INFO: Deleting pod "pod-subpath-test-projected-ssr6" in namespace "subpath-9233" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:39:06.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9233" for this suite. 
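[Editor's note] The subpath test above mounts a single entry of a projected (atomic-writer) volume via `subPath` instead of mounting the whole volume directory. A hedged sketch of the pattern, with illustrative names throughout (the real test generates pod-subpath-test-projected-ssr6 and its own volume sources):

```yaml
# Sketch: mounts one key of a projected volume as a single file via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo               # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap       # assumed to exist in the namespace
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /test-volume/my-key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/my-key
      subPath: my-key              # a single file, not the whole volume
```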
Jun 22 13:39:12.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:39:12.750: INFO: namespace subpath-9233 deletion completed in 6.133648969s • [SLOW TEST:34.632 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:39:12.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b3e97623-bbf4-496d-abc1-592f0c5d3b18 STEP: Creating a pod to test consume configMaps Jun 22 13:39:12.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed" in namespace "configmap-6438" to be "success or failure" Jun 22 13:39:13.015: INFO: Pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 88.32725ms Jun 22 13:39:15.019: INFO: Pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092269403s Jun 22 13:39:17.320: INFO: Pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393120875s Jun 22 13:39:19.324: INFO: Pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.396697927s STEP: Saw pod success Jun 22 13:39:19.324: INFO: Pod "pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed" satisfied condition "success or failure" Jun 22 13:39:19.326: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed container configmap-volume-test: STEP: delete the pod Jun 22 13:39:19.376: INFO: Waiting for pod pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed to disappear Jun 22 13:39:19.385: INFO: Pod pod-configmaps-37c707ea-c4d1-4467-8b49-3bc74bfc37ed no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:39:19.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6438" for this suite. 
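[Editor's note] The "mappings" in the ConfigMap volume test name refer to `items`, which remap a ConfigMap key to a different file path inside the mounted volume. A sketch with illustrative names:

```yaml
# Sketch: the key "data-1" is exposed at <mountPath>/path/to/data-2
# instead of the default <mountPath>/data-1.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo     # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: cm-vol
    configMap:
      name: my-configmap           # assumed to contain the key "data-1"
      items:
      - key: data-1
        path: path/to/data-2       # remapped file path
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/configmap-volume
```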
Jun 22 13:39:25.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:39:25.588: INFO: namespace configmap-6438 deletion completed in 6.200230111s • [SLOW TEST:12.838 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:39:25.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:39:25.739: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:39:31.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8106" for this suite. 
Jun 22 13:40:11.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:40:12.005: INFO: namespace pods-8106 deletion completed in 40.099663105s • [SLOW TEST:46.416 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:40:12.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 13:40:12.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9957' Jun 22 13:40:12.210: INFO: stderr: "kubectl run --generator=run/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 13:40:12.210: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 22 13:40:12.251: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 22 13:40:12.301: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 22 13:40:12.336: INFO: scanned /root for discovery docs: Jun 22 13:40:12.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9957' Jun 22 13:40:29.381: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 13:40:29.381: INFO: stdout: "Created e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed\nScaling up e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 22 13:40:29.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9957' Jun 22 13:40:29.484: INFO: stderr: "" Jun 22 13:40:29.485: INFO: stdout: "e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed-j22ck " Jun 22 13:40:29.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed-j22ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9957' Jun 22 13:40:29.571: INFO: stderr: "" Jun 22 13:40:29.571: INFO: stdout: "true" Jun 22 13:40:29.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed-j22ck -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9957' Jun 22 13:40:29.659: INFO: stderr: "" Jun 22 13:40:29.659: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 22 13:40:29.659: INFO: e2e-test-nginx-rc-8ee6ca30465ce162cc0e84487225a4ed-j22ck is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jun 22 13:40:29.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9957' Jun 22 13:40:29.835: INFO: stderr: "" Jun 22 13:40:29.835: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:40:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9957" for this suite. 
Jun 22 13:40:51.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:40:52.054: INFO: namespace kubectl-9957 deletion completed in 22.18919889s • [SLOW TEST:40.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:40:52.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-91abe9e0-8c66-40a4-aaf7-8132996f0848 STEP: Creating a pod to test consume secrets Jun 22 13:40:52.276: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275" in namespace "projected-9349" to be "success or failure" Jun 22 13:40:52.286: INFO: Pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.40548ms Jun 22 13:40:54.290: INFO: Pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01385887s Jun 22 13:40:56.295: INFO: Pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018361377s Jun 22 13:40:58.299: INFO: Pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022328614s STEP: Saw pod success Jun 22 13:40:58.299: INFO: Pod "pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275" satisfied condition "success or failure" Jun 22 13:40:58.301: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275 container projected-secret-volume-test: STEP: delete the pod Jun 22 13:40:58.348: INFO: Waiting for pod pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275 to disappear Jun 22 13:40:58.370: INFO: Pod pod-projected-secrets-da000c37-99e0-409a-95be-d52e5df1d275 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:40:58.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9349" for this suite. 
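[Editor's note] The defaultMode test above sets file permissions on a projected secret volume. A sketch of the mechanism, with illustrative names (the real test generates its own secret and pod names):

```yaml
# Sketch: defaultMode applies 0400 (owner read-only) to every file
# materialized from the projected secret source.
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo    # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400            # octal; serialized as an integer
      sources:
      - secret:
          name: my-secret          # assumed to exist
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
```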
Jun 22 13:41:04.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:41:04.650: INFO: namespace projected-9349 deletion completed in 6.277001545s
• [SLOW TEST:12.596 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:41:04.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5619/configmap-test-7f90614c-e69e-407c-b84b-195072fb35fa
STEP: Creating a pod to test consume configMaps
Jun 22 13:41:04.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e" in namespace "configmap-5619" to be "success or failure"
Jun 22 13:41:04.975: INFO: Pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.062719ms
Jun 22 13:41:07.068: INFO: Pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.102396361s
Jun 22 13:41:09.072: INFO: Pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106616256s
Jun 22 13:41:11.076: INFO: Pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110674607s
STEP: Saw pod success
Jun 22 13:41:11.076: INFO: Pod "pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e" satisfied condition "success or failure"
Jun 22 13:41:11.079: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e container env-test:
STEP: delete the pod
Jun 22 13:41:11.106: INFO: Waiting for pod pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e to disappear
Jun 22 13:41:11.223: INFO: Pod pod-configmaps-75d722da-1b29-426a-b79a-8f01d7a1e65e no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:41:11.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5619" for this suite.
Jun 22 13:41:17.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:41:17.337: INFO: namespace configmap-5619 deletion completed in 6.110609466s
• [SLOW TEST:12.687 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:41:17.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:42:17.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-624" for this suite.
Jun 22 13:42:39.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:42:40.003: INFO: namespace container-probe-624 deletion completed in 22.359774733s
• [SLOW TEST:82.666 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:42:40.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 22 13:42:40.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad" in namespace "downward-api-7991" to be "success or failure"
Jun 22 13:42:40.214: INFO: Pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad": Phase="Pending", Reason="", readiness=false.
Elapsed: 25.427414ms
Jun 22 13:42:42.262: INFO: Pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073979396s
Jun 22 13:42:44.267: INFO: Pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079127392s
Jun 22 13:42:46.437: INFO: Pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248961452s
STEP: Saw pod success
Jun 22 13:42:46.437: INFO: Pod "downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad" satisfied condition "success or failure"
Jun 22 13:42:46.440: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad container client-container:
STEP: delete the pod
Jun 22 13:42:46.478: INFO: Waiting for pod downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad to disappear
Jun 22 13:42:46.507: INFO: Pod downwardapi-volume-03ed1d5e-6fc0-4cb0-a88b-d7716f0a12ad no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:42:46.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7991" for this suite.
Jun 22 13:42:52.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:42:52.681: INFO: namespace downward-api-7991 deletion completed in 6.171579152s
• [SLOW TEST:12.678 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:42:52.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 22 13:42:52.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030" in namespace "projected-8427" to be "success or failure"
Jun 22 13:42:53.014: INFO: Pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030": Phase="Pending", Reason="", readiness=false.
Elapsed: 34.84474ms
Jun 22 13:42:55.018: INFO: Pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039165598s
Jun 22 13:42:57.022: INFO: Pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043619308s
Jun 22 13:42:59.031: INFO: Pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052730317s
STEP: Saw pod success
Jun 22 13:42:59.031: INFO: Pod "downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030" satisfied condition "success or failure"
Jun 22 13:42:59.034: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030 container client-container:
STEP: delete the pod
Jun 22 13:42:59.104: INFO: Waiting for pod downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030 to disappear
Jun 22 13:42:59.175: INFO: Pod downwardapi-volume-85d69578-c419-4d3e-bef8-b6fc07a8d030 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:42:59.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8427" for this suite.
Jun 22 13:43:05.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:43:05.675: INFO: namespace projected-8427 deletion completed in 6.495835482s
• [SLOW TEST:12.994 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:43:05.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0622 13:43:36.029699       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 22 13:43:36.029: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:43:36.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4169" for this suite.
Jun 22 13:43:46.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:43:46.208: INFO: namespace gc-4169 deletion completed in 10.176102465s
• [SLOW TEST:40.533 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:43:46.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9292.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9292.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 22 13:43:54.576: INFO: DNS probes using dns-9292/dns-test-d9853fc5-8fa8-4ce3-b70d-1788a3133753 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:43:54.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9292" for this suite.
Jun 22 13:44:00.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:44:01.046: INFO: namespace dns-9292 deletion completed in 6.334426368s
• [SLOW TEST:14.837 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:44:01.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jun 22 13:44:01.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-461 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jun 22 13:44:06.299: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0622 13:44:06.228563 1307 log.go:172] (0xc000962210) (0xc0008da5a0) Create stream\nI0622 13:44:06.228629 1307 log.go:172] (0xc000962210) (0xc0008da5a0) Stream added, broadcasting: 1\nI0622 13:44:06.231327 1307 log.go:172] (0xc000962210) Reply frame received for 1\nI0622 13:44:06.231391 1307 log.go:172] (0xc000962210) (0xc000a9a140) Create stream\nI0622 13:44:06.231417 1307 log.go:172] (0xc000962210) (0xc000a9a140) Stream added, broadcasting: 3\nI0622 13:44:06.232335 1307 log.go:172] (0xc000962210) Reply frame received for 3\nI0622 13:44:06.232366 1307 log.go:172] (0xc000962210) (0xc0008da640) Create stream\nI0622 13:44:06.232381 1307 log.go:172] (0xc000962210) (0xc0008da640) Stream added, broadcasting: 5\nI0622 13:44:06.233315 1307 log.go:172] (0xc000962210) Reply frame received for 5\nI0622 13:44:06.233349 1307 log.go:172] (0xc000962210) (0xc000a38000) Create stream\nI0622 13:44:06.233357 1307 log.go:172] (0xc000962210) (0xc000a38000) Stream added, broadcasting: 7\nI0622 13:44:06.234247 1307 log.go:172] (0xc000962210) Reply frame received for 7\nI0622 13:44:06.234376 1307 log.go:172] (0xc000a9a140) (3) Writing data frame\nI0622 13:44:06.234478 1307 log.go:172] (0xc000a9a140) (3) Writing data frame\nI0622 13:44:06.235266 1307 log.go:172] (0xc000962210) Data frame received for 5\nI0622 13:44:06.235282 1307 log.go:172] (0xc0008da640) (5) Data frame handling\nI0622 13:44:06.235297 1307 log.go:172] (0xc0008da640) (5) Data frame sent\nI0622 13:44:06.235927 1307 log.go:172] (0xc000962210) Data frame received for 5\nI0622 13:44:06.235937 1307 log.go:172] (0xc0008da640) (5) Data frame handling\nI0622 13:44:06.235943 1307 log.go:172] (0xc0008da640) (5) Data frame sent\nI0622 13:44:06.274620 1307 log.go:172] (0xc000962210) Data frame received for 5\nI0622 13:44:06.274661 1307 log.go:172] (0xc0008da640) (5) Data frame handling\nI0622 13:44:06.274715 1307 
log.go:172] (0xc000962210) Data frame received for 7\nI0622 13:44:06.274747 1307 log.go:172] (0xc000a38000) (7) Data frame handling\nI0622 13:44:06.275099 1307 log.go:172] (0xc000962210) Data frame received for 1\nI0622 13:44:06.275137 1307 log.go:172] (0xc0008da5a0) (1) Data frame handling\nI0622 13:44:06.275158 1307 log.go:172] (0xc0008da5a0) (1) Data frame sent\nI0622 13:44:06.275188 1307 log.go:172] (0xc000962210) (0xc0008da5a0) Stream removed, broadcasting: 1\nI0622 13:44:06.275299 1307 log.go:172] (0xc000962210) (0xc0008da5a0) Stream removed, broadcasting: 1\nI0622 13:44:06.275345 1307 log.go:172] (0xc000962210) (0xc000a9a140) Stream removed, broadcasting: 3\nI0622 13:44:06.275370 1307 log.go:172] (0xc000962210) (0xc0008da640) Stream removed, broadcasting: 5\nI0622 13:44:06.275397 1307 log.go:172] (0xc000962210) (0xc000a38000) Stream removed, broadcasting: 7\nI0622 13:44:06.275594 1307 log.go:172] (0xc000962210) (0xc000a9a140) Stream removed, broadcasting: 3\nI0622 13:44:06.275826 1307 log.go:172] (0xc000962210) Go away received\n" Jun 22 13:44:06.299: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:44:08.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-461" for this suite. 
Jun 22 13:44:14.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:44:14.522: INFO: namespace kubectl-461 deletion completed in 6.199472465s
• [SLOW TEST:13.476 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:44:14.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jun 22 13:44:14.658: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:44:24.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8262" for this suite.
Jun 22 13:44:30.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:44:30.207: INFO: namespace init-container-8262 deletion completed in 6.108047465s
• [SLOW TEST:15.683 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:44:30.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3a7c9d8d-0c4b-4b4c-9fa8-1d993fd4b0d5
STEP: Creating a pod to test consume configMaps
Jun 22 13:44:30.562: INFO: Waiting up to 5m0s for pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54" in namespace "configmap-8291"
to be "success or failure"
Jun 22 13:44:30.622: INFO: Pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54": Phase="Pending", Reason="", readiness=false. Elapsed: 59.350281ms
Jun 22 13:44:32.626: INFO: Pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06367681s
Jun 22 13:44:34.630: INFO: Pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54": Phase="Running", Reason="", readiness=true. Elapsed: 4.06790654s
Jun 22 13:44:36.635: INFO: Pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072286424s
STEP: Saw pod success
Jun 22 13:44:36.635: INFO: Pod "pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54" satisfied condition "success or failure"
Jun 22 13:44:36.637: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54 container configmap-volume-test:
STEP: delete the pod
Jun 22 13:44:36.692: INFO: Waiting for pod pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54 to disappear
Jun 22 13:44:36.779: INFO: Pod pod-configmaps-79d77d86-811d-42f0-b0e6-ef6707714d54 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:44:36.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8291" for this suite.
Jun 22 13:44:42.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:44:42.924: INFO: namespace configmap-8291 deletion completed in 6.141372718s
• [SLOW TEST:12.718 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:44:42.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 22 13:44:43.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b" in namespace "projected-5293" to be "success or failure"
Jun 22 13:44:43.060: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.526475ms
Jun 22 13:44:45.064: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009019341s
Jun 22 13:44:47.068: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013474293s
Jun 22 13:44:49.073: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.018337738s
Jun 22 13:44:51.078: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022757254s
STEP: Saw pod success
Jun 22 13:44:51.078: INFO: Pod "downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b" satisfied condition "success or failure"
Jun 22 13:44:51.081: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b container client-container:
STEP: delete the pod
Jun 22 13:44:51.125: INFO: Waiting for pod downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b to disappear
Jun 22 13:44:51.181: INFO: Pod downwardapi-volume-adbe1361-6819-4ed0-9451-8b3cd6651e9b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:44:51.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5293" for this suite.
Jun 22 13:44:57.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 13:44:57.340: INFO: namespace projected-5293 deletion completed in 6.155384363s
• [SLOW TEST:14.415 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 13:44:57.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jun 22 13:45:03.616: INFO: Pod pod-hostip-3fca65e9-dc49-4eb3-9e1a-0923bdfc57c5 has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 13:45:03.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4899" for this suite.
Jun 22 13:45:25.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:45:25.730: INFO: namespace pods-4899 deletion completed in 22.109577304s • [SLOW TEST:28.389 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:45:25.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9045 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 22 13:45:26.031: INFO: Found 0 stateful pods, waiting for 3 Jun 22 13:45:36.036: INFO: Found 2 stateful pods, waiting for 3 Jun 22 13:45:46.037: INFO: Waiting for pod ss2-0 to 
enter Running - Ready=true, currently Running - Ready=true Jun 22 13:45:46.037: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:45:46.037: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 22 13:45:46.063: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 22 13:45:56.208: INFO: Updating stateful set ss2 Jun 22 13:45:56.293: INFO: Waiting for Pod statefulset-9045/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 13:46:06.300: INFO: Waiting for Pod statefulset-9045/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 22 13:46:19.428: INFO: Found 2 stateful pods, waiting for 3 Jun 22 13:46:29.434: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:46:29.434: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:46:29.434: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 22 13:46:39.477: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:46:39.477: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 22 13:46:39.477: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 22 13:46:39.499: INFO: Updating stateful set ss2 Jun 22 13:46:39.521: INFO: Waiting for Pod statefulset-9045/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 13:46:49.546: INFO: Waiting for Pod 
statefulset-9045/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 13:46:59.563: INFO: Updating stateful set ss2 Jun 22 13:46:59.612: INFO: Waiting for StatefulSet statefulset-9045/ss2 to complete update Jun 22 13:46:59.612: INFO: Waiting for Pod statefulset-9045/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 22 13:47:09.619: INFO: Waiting for StatefulSet statefulset-9045/ss2 to complete update Jun 22 13:47:09.619: INFO: Waiting for Pod statefulset-9045/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 22 13:47:19.620: INFO: Deleting all statefulset in ns statefulset-9045 Jun 22 13:47:19.623: INFO: Scaling statefulset ss2 to 0 Jun 22 13:47:49.675: INFO: Waiting for statefulset status.replicas updated to 0 Jun 22 13:47:49.678: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:47:49.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9045" for this suite. 
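The canary and phased behavior recorded above is driven by `spec.updateStrategy.rollingUpdate.partition`: ordinals greater than or equal to the partition get the new revision (`ss2-7c9b54fd4c`), lower ordinals keep the old one (`ss2-6c5cd755cd`), and the test lowers the partition step by step. A sketch of the relevant spec (fields beyond those shown in the log are illustrative):

```yaml
# Hypothetical StatefulSet spec matching the test's canary phase.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                  # headless service created in BeforeEach
  selector:
    matchLabels:
      app: ss2                       # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                   # only ordinal >= 2 (ss2-2) is updated
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the updated image
```

Setting `partition: 0` (or omitting it) then rolls the remaining ordinals, which is the "phased rolling update" step the log shows for ss2-1 and ss2-0.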
Jun 22 13:47:57.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:47:58.081: INFO: namespace statefulset-9045 deletion completed in 8.322550294s • [SLOW TEST:152.350 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:47:58.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-af732df6-d073-4f2e-ba6f-a424978ee54f STEP: Creating a pod to test consume configMaps Jun 22 13:47:58.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d" in namespace "configmap-3584" to be "success or failure" Jun 22 13:47:58.272: INFO: Pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.188496ms Jun 22 13:48:00.276: INFO: Pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00720587s Jun 22 13:48:02.286: INFO: Pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01695049s Jun 22 13:48:04.290: INFO: Pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021055174s STEP: Saw pod success Jun 22 13:48:04.290: INFO: Pod "pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d" satisfied condition "success or failure" Jun 22 13:48:04.294: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d container configmap-volume-test: STEP: delete the pod Jun 22 13:48:04.366: INFO: Waiting for pod pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d to disappear Jun 22 13:48:04.374: INFO: Pod pod-configmaps-3d64c7a0-eef4-4126-88e5-478240555c7d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:48:04.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3584" for this suite. 
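The ConfigMap volume spec above mounts a generated ConfigMap into a pod and reads a key back from the filesystem. A minimal sketch, with hypothetical names in place of the test's generated ones:

```yaml
# Hypothetical pod consuming a ConfigMap as a volume, as the test does.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]  # key name is illustrative
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: my-configmap             # the test uses a UUID-suffixed name
```

Each key in the ConfigMap appears as a file under the mount path, so the assertion reduces to comparing the container's log output against the expected key value.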
Jun 22 13:48:10.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:48:10.575: INFO: namespace configmap-3584 deletion completed in 6.198784579s • [SLOW TEST:12.493 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:48:10.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:48:10.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98" in namespace "projected-3509" to be "success or failure" Jun 22 13:48:10.745: INFO: Pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.234064ms Jun 22 13:48:12.750: INFO: Pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020783887s Jun 22 13:48:14.922: INFO: Pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98": Phase="Running", Reason="", readiness=true. Elapsed: 4.192394715s Jun 22 13:48:16.925: INFO: Pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.196150055s STEP: Saw pod success Jun 22 13:48:16.925: INFO: Pod "downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98" satisfied condition "success or failure" Jun 22 13:48:16.927: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98 container client-container: STEP: delete the pod Jun 22 13:48:16.955: INFO: Waiting for pod downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98 to disappear Jun 22 13:48:16.979: INFO: Pod downwardapi-volume-76f54137-9ba6-4703-bf2e-c3244684ce98 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:48:16.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3509" for this suite. 
Jun 22 13:48:23.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:48:23.165: INFO: namespace projected-3509 deletion completed in 6.182554816s • [SLOW TEST:12.589 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:48:23.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-665x STEP: Creating a pod to test atomic-volume-subpath Jun 22 13:48:23.436: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-665x" in namespace "subpath-7825" to be "success or failure" Jun 22 13:48:23.446: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.721879ms Jun 22 13:48:25.451: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015413653s Jun 22 13:48:27.455: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019556166s Jun 22 13:48:29.459: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 6.023633327s Jun 22 13:48:31.464: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 8.028088142s Jun 22 13:48:33.468: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 10.032270765s Jun 22 13:48:35.473: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 12.037233328s Jun 22 13:48:37.475: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 14.03959879s Jun 22 13:48:39.479: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 16.043114256s Jun 22 13:48:41.482: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 18.046727086s Jun 22 13:48:43.486: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 20.050772319s Jun 22 13:48:45.490: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 22.054727072s Jun 22 13:48:47.494: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 24.058164558s Jun 22 13:48:49.498: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Running", Reason="", readiness=true. Elapsed: 26.062622381s Jun 22 13:48:51.502: INFO: Pod "pod-subpath-test-configmap-665x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.066479724s STEP: Saw pod success Jun 22 13:48:51.502: INFO: Pod "pod-subpath-test-configmap-665x" satisfied condition "success or failure" Jun 22 13:48:51.505: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-665x container test-container-subpath-configmap-665x: STEP: delete the pod Jun 22 13:48:51.551: INFO: Waiting for pod pod-subpath-test-configmap-665x to disappear Jun 22 13:48:51.574: INFO: Pod pod-subpath-test-configmap-665x no longer exists STEP: Deleting pod pod-subpath-test-configmap-665x Jun 22 13:48:51.574: INFO: Deleting pod "pod-subpath-test-configmap-665x" in namespace "subpath-7825" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:48:51.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7825" for this suite. Jun 22 13:48:57.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:48:57.737: INFO: namespace subpath-7825 deletion completed in 6.158223679s • [SLOW TEST:34.572 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
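The atomic-writer subpath spec that finished above mounts a single path from a ConfigMap volume via `subPath` and verifies the container keeps seeing consistent content while the volume is updated. A hedged sketch of the mount shape (names are illustrative, not the test's generated ones):

```yaml
# Hypothetical pod mounting one path of a ConfigMap volume via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /test/sub/data-1"]
    volumeMounts:
    - name: config
      mountPath: /test/sub
      subPath: sub-dir               # mounts only this path within the volume
  volumes:
  - name: config
    configMap:
      name: my-configmap             # hypothetical ConfigMap name
```

Because `subPath` bypasses the atomic symlink-swap that whole-volume ConfigMap mounts use, content under a subPath is not live-updated, which is the property this family of tests exercises.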
Jun 22 13:48:57.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:48:57.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 22 13:48:58.084: INFO: stderr: "" Jun 22 13:48:58.084: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:08:14Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:48:58.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2450" for this suite. 
Jun 22 13:49:04.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:49:04.248: INFO: namespace kubectl-2450 deletion completed in 6.159968844s • [SLOW TEST:6.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:49:04.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:49:04.510: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:49:10.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6704" for this suite. 
Jun 22 13:49:50.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:49:50.741: INFO: namespace pods-6704 deletion completed in 40.149986603s • [SLOW TEST:46.493 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:49:50.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:49:50.885: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 22 13:49:50.939: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 22 13:49:55.944: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 13:49:55.944: INFO: Creating deployment "test-rolling-update-deployment" Jun 22 13:49:55.948: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has Jun 22 13:49:55.978: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 22 13:49:58.162: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 22 13:49:58.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430595, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 13:50:00.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728430596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63728430595, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 13:50:02.168: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 22 13:50:02.177: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-53,SelfLink:/apis/apps/v1/namespaces/deployment-53/deployments/test-rolling-update-deployment,UID:97d468fe-c65d-4533-bfec-9322b6f8ceec,ResourceVersion:17862150,Generation:1,CreationTimestamp:2020-06-22 13:49:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-22 13:49:56 +0000 UTC 2020-06-22 13:49:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-22 13:50:01 +0000 UTC 2020-06-22 13:49:55 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 22 13:50:02.181: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-53,SelfLink:/apis/apps/v1/namespaces/deployment-53/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:874efe01-b225-46b3-91f9-8e89beac986b,ResourceVersion:17862139,Generation:1,CreationTimestamp:2020-06-22 13:49:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 97d468fe-c65d-4533-bfec-9322b6f8ceec 0xc002aa6c77 0xc002aa6c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 13:50:02.181: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 22 13:50:02.181: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-53,SelfLink:/apis/apps/v1/namespaces/deployment-53/replicasets/test-rolling-update-controller,UID:c0771c43-9801-476e-8678-8125618f786d,ResourceVersion:17862148,Generation:2,CreationTimestamp:2020-06-22 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 97d468fe-c65d-4533-bfec-9322b6f8ceec 0xc002aa6b67 0xc002aa6b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 13:50:02.184: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-ln7c6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-ln7c6,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-53,SelfLink:/api/v1/namespaces/deployment-53/pods/test-rolling-update-deployment-79f6b9d75c-ln7c6,UID:e4e43ab3-891e-4a12-9253-dd47e5950cd5,ResourceVersion:17862138,Generation:0,CreationTimestamp:2020-06-22 13:49:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 874efe01-b225-46b3-91f9-8e89beac986b 0xc002aa7567 0xc002aa7568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6tx5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6tx5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-6tx5x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002aa75e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002aa7600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:49:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:50:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:50:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 13:49:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.96,StartTime:2020-06-22 13:49:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-22 13:50:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f84f5236d5c6f1e4b210865c8cb56e17ee9d21f9f2351b2ab2d4ef60028bddbc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:50:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-53" for this suite. Jun 22 13:50:10.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:50:10.403: INFO: namespace deployment-53 deletion completed in 8.214810327s • [SLOW TEST:19.661 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:50:10.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:50:10.632: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633" in namespace "projected-9333" to be "success or failure" Jun 22 13:50:10.671: INFO: Pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.94376ms Jun 22 13:50:12.676: INFO: Pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043989957s Jun 22 13:50:14.762: INFO: Pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129945382s Jun 22 13:50:16.766: INFO: Pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133857798s STEP: Saw pod success Jun 22 13:50:16.766: INFO: Pod "downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633" satisfied condition "success or failure" Jun 22 13:50:16.768: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633 container client-container: STEP: delete the pod Jun 22 13:50:16.810: INFO: Waiting for pod downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633 to disappear Jun 22 13:50:16.850: INFO: Pod downwardapi-volume-4274d9cf-ebc4-4210-b67a-54dff5c5a633 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:50:16.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9333" for this suite. 
Jun 22 13:50:22.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:50:23.037: INFO: namespace projected-9333 deletion completed in 6.18302554s • [SLOW TEST:12.634 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:50:23.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 13:50:23.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8426' Jun 22 13:50:26.232: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 13:50:26.232: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jun 22 13:50:28.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8426' Jun 22 13:50:28.491: INFO: stderr: "" Jun 22 13:50:28.491: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:50:28.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8426" for this suite. 
Jun 22 13:50:50.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:50:50.755: INFO: namespace kubectl-8426 deletion completed in 22.259929555s • [SLOW TEST:27.717 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:50:50.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ghhj4 in namespace proxy-8721 I0622 13:50:51.058804 7 runners.go:180] Created replication controller with name: proxy-service-ghhj4, namespace: proxy-8721, replica count: 1 I0622 13:50:52.109326 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:50:53.109710 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0622 13:50:54.109911 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:50:55.110153 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0622 13:50:56.110368 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 13:50:57.110643 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 13:50:58.110788 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 13:50:59.110978 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0622 13:51:00.111191 7 runners.go:180] proxy-service-ghhj4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 22 13:51:00.114: INFO: setup took 9.202675629s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 22 13:51:00.119: INFO: (0) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.399071ms) Jun 22 13:51:00.121: INFO: (0) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 6.314895ms) Jun 22 13:51:00.122: INFO: (0) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 7.765597ms) Jun 22 13:51:00.122: INFO: (0) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 7.773875ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 7.810606ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 7.970344ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 8.014819ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 8.559019ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 8.486445ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 8.548991ms) Jun 22 13:51:00.123: INFO: (0) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 8.572213ms) Jun 22 13:51:00.153: INFO: (0) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 38.937739ms) Jun 22 13:51:00.153: INFO: (0) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 38.904202ms) Jun 22 13:51:00.153: INFO: (0) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 38.844897ms) Jun 22 13:51:00.154: INFO: (0) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test<... 
(200; 4.081836ms) Jun 22 13:51:00.158: INFO: (1) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 4.237495ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 5.602765ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 5.613171ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 5.675145ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 5.715647ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 5.71044ms) Jun 22 13:51:00.159: INFO: (1) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.816952ms) Jun 22 13:51:00.160: INFO: (1) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 6.049912ms) Jun 22 13:51:00.160: INFO: (1) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.886961ms) Jun 22 13:51:00.160: INFO: (1) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.944192ms) Jun 22 13:51:00.160: INFO: (1) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 6.059539ms) Jun 22 13:51:00.161: INFO: (1) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 7.700471ms) Jun 22 13:51:00.166: INFO: (2) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 4.504427ms) Jun 22 13:51:00.166: INFO: (2) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.689903ms) Jun 22 13:51:00.166: INFO: (2) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 
4.722095ms) Jun 22 13:51:00.167: INFO: (2) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 4.853084ms) Jun 22 13:51:00.167: INFO: (2) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 5.15313ms) Jun 22 13:51:00.167: INFO: (2) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 5.427297ms) Jun 22 13:51:00.167: INFO: (2) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 5.462848ms) Jun 22 13:51:00.167: INFO: (2) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 5.74921ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 6.918878ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 7.106653ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 7.099294ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 7.237387ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 7.382683ms) Jun 22 13:51:00.169: INFO: (2) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 7.431082ms) Jun 22 13:51:00.173: INFO: (3) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.316642ms) Jun 22 13:51:00.173: INFO: (3) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 3.294847ms) Jun 22 13:51:00.173: INFO: (3) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 3.973861ms) Jun 22 13:51:00.173: INFO: (3) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 4.133144ms) Jun 22 13:51:00.174: INFO: (3) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.235762ms) Jun 22 13:51:00.174: INFO: (3) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.172906ms) Jun 22 13:51:00.174: INFO: (3) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.191969ms) Jun 22 13:51:00.174: INFO: (3) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.280141ms) Jun 22 13:51:00.175: INFO: (3) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.373174ms) Jun 22 13:51:00.175: INFO: (3) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.560195ms) Jun 22 13:51:00.178: INFO: (4) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 3.416803ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.812871ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 4.07202ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 4.399546ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.484231ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.451389ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.487009ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 4.649357ms) Jun 22 13:51:00.179: INFO: (4) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.783493ms) Jun 22 13:51:00.180: INFO: (4) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.870712ms) Jun 22 13:51:00.180: INFO: (4) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.963343ms) Jun 22 13:51:00.180: INFO: (4) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 4.948317ms) Jun 22 13:51:00.180: INFO: (4) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 2.633785ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 2.509106ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 2.940411ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 2.927388ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.938493ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... 
(200; 2.687007ms) Jun 22 13:51:00.184: INFO: (5) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 2.811722ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.143215ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 4.170754ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.079632ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.72817ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.05609ms) Jun 22 13:51:00.185: INFO: (5) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.010214ms) Jun 22 13:51:00.189: INFO: (6) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.384919ms) Jun 22 13:51:00.189: INFO: (6) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 3.410778ms) Jun 22 13:51:00.189: INFO: (6) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.396436ms) Jun 22 13:51:00.189: INFO: (6) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 3.602968ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 3.923419ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.922276ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test<... 
(200; 4.094708ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.221449ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.285806ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.367407ms) Jun 22 13:51:00.190: INFO: (6) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.299116ms) Jun 22 13:51:00.191: INFO: (6) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.322773ms) Jun 22 13:51:00.191: INFO: (6) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.468898ms) Jun 22 13:51:00.191: INFO: (6) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.517937ms) Jun 22 13:51:00.191: INFO: (6) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.487041ms) Jun 22 13:51:00.195: INFO: (7) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.467821ms) Jun 22 13:51:00.195: INFO: (7) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.236695ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 4.259307ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.396998ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.414101ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... 
(200; 4.462706ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.442871ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 4.904697ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.99203ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.066928ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.129718ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.123109ms) Jun 22 13:51:00.196: INFO: (7) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.197805ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.821837ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.833564ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 5.005877ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 5.561678ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.552558ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.4058ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.656158ms) Jun 22 13:51:00.202: INFO: (8) 
/api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.583785ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 5.456998ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 5.449881ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 5.624199ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 5.826337ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 5.851193ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.844937ms) Jun 22 13:51:00.202: INFO: (8) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.909596ms) Jun 22 13:51:00.206: INFO: (9) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 3.46096ms) Jun 22 13:51:00.206: INFO: (9) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 3.543436ms) Jun 22 13:51:00.206: INFO: (9) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.539249ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.955928ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.208698ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.236909ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 4.534414ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 4.583972ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.762115ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.787652ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 4.803339ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.787005ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 4.853479ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.864398ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 4.831451ms) Jun 22 13:51:00.207: INFO: (9) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 2.981708ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.153375ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.160284ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 3.107682ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 3.186671ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.207117ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: 
test<... (200; 3.32811ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 3.267321ms) Jun 22 13:51:00.211: INFO: (10) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 2.977396ms) Jun 22 13:51:00.221: INFO: (11) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 5.151521ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 5.165196ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.250531ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.443269ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 5.499204ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.458989ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.49904ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.575292ms) Jun 22 13:51:00.223: INFO: (11) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.634452ms) Jun 22 13:51:00.226: INFO: (12) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 2.670668ms) Jun 22 13:51:00.226: INFO: (12) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 2.652228ms) Jun 22 13:51:00.226: INFO: (12) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.011938ms) Jun 22 13:51:00.227: INFO: (12) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 3.248702ms) Jun 22 13:51:00.227: INFO: (12) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.012061ms) Jun 22 13:51:00.227: INFO: (12) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test<... (200; 4.381113ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 4.557904ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 4.557691ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.54453ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 4.51426ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.599668ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 4.678755ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.725902ms) Jun 22 13:51:00.228: INFO: (12) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.074789ms) Jun 22 13:51:00.232: INFO: (13) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.423817ms) Jun 22 13:51:00.232: INFO: (13) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.697614ms) Jun 22 13:51:00.232: INFO: (13) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz 
(200; 3.65356ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 3.999142ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 3.97304ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.1034ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.102916ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 4.174637ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 4.19202ms) Jun 22 13:51:00.233: INFO: (13) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.748259ms) Jun 22 13:51:00.234: INFO: (13) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.085505ms) Jun 22 13:51:00.234: INFO: (13) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.044819ms) Jun 22 13:51:00.234: INFO: (13) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.134142ms) Jun 22 13:51:00.234: INFO: (13) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.126501ms) Jun 22 13:51:00.234: INFO: (13) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.114911ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.728895ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 3.821751ms) Jun 22 13:51:00.238: INFO: (14) 
/api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 3.808099ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 3.919572ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 3.948214ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 4.047031ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.074425ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.154238ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.077741ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.35526ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 4.487966ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 4.437339ms) Jun 22 13:51:00.238: INFO: (14) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.635405ms) Jun 22 13:51:00.239: INFO: (14) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 4.681819ms) Jun 22 13:51:00.239: INFO: (14) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 4.634123ms) Jun 22 13:51:00.241: INFO: (15) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 2.808746ms) Jun 22 13:51:00.242: INFO: (15) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... 
(200; 2.999716ms) Jun 22 13:51:00.242: INFO: (15) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 3.301523ms) Jun 22 13:51:00.242: INFO: (15) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.242839ms) Jun 22 13:51:00.242: INFO: (15) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.338099ms) Jun 22 13:51:00.242: INFO: (15) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.361636ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 4.135939ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 4.109615ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.563902ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 4.523317ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 4.605941ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.702371ms) Jun 22 13:51:00.243: INFO: (15) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.751092ms) Jun 22 13:51:00.245: INFO: (16) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 1.974535ms) Jun 22 13:51:00.247: INFO: (16) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.494309ms) Jun 22 13:51:00.247: INFO: (16) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 3.610184ms) Jun 22 13:51:00.248: INFO: (16) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 
4.046704ms) Jun 22 13:51:00.248: INFO: (16) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.117764ms) Jun 22 13:51:00.248: INFO: (16) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.124196ms) Jun 22 13:51:00.249: INFO: (16) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.937718ms) Jun 22 13:51:00.249: INFO: (16) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 5.897553ms) Jun 22 13:51:00.249: INFO: (16) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 5.890214ms) Jun 22 13:51:00.251: INFO: (16) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 7.070508ms) Jun 22 13:51:00.251: INFO: (16) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 7.110586ms) Jun 22 13:51:00.251: INFO: (16) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 7.31268ms) Jun 22 13:51:00.251: INFO: (16) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 7.301704ms) Jun 22 13:51:00.251: INFO: (16) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 7.344828ms) Jun 22 13:51:00.254: INFO: (17) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.427168ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 5.188454ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 5.380516ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 5.403637ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: 
test<... (200; 5.376785ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.484833ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 5.457756ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 5.442854ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 5.432488ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.525667ms) Jun 22 13:51:00.256: INFO: (17) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test (200; 5.659352ms) Jun 22 13:51:00.257: INFO: (17) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 5.716941ms) Jun 22 13:51:00.257: INFO: (17) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.721616ms) Jun 22 13:51:00.259: INFO: (18) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 2.150485ms) Jun 22 13:51:00.259: INFO: (18) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 2.597524ms) Jun 22 13:51:00.260: INFO: (18) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 3.23028ms) Jun 22 13:51:00.260: INFO: (18) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.606068ms) Jun 22 13:51:00.260: INFO: (18) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 3.670709ms) Jun 22 13:51:00.260: INFO: (18) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 3.7504ms) Jun 22 13:51:00.261: INFO: (18) 
/api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 3.758446ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 3.775413ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:1080/proxy/: test<... (200; 3.751251ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 4.310635ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 4.313516ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 4.299173ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: ... (200; 4.312502ms) Jun 22 13:51:00.261: INFO: (18) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 4.326074ms) Jun 22 13:51:00.263: INFO: (19) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 2.046708ms) Jun 22 13:51:00.264: INFO: (19) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 2.861313ms) Jun 22 13:51:00.264: INFO: (19) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh/proxy/: test (200; 2.813534ms) Jun 22 13:51:00.265: INFO: (19) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname2/proxy/: tls qux (200; 3.506834ms) Jun 22 13:51:00.265: INFO: (19) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:443/proxy/: test<... 
(200; 3.337785ms) Jun 22 13:51:00.265: INFO: (19) /api/v1/namespaces/proxy-8721/pods/proxy-service-ghhj4-lcmrh:160/proxy/: foo (200; 3.751981ms) Jun 22 13:51:00.266: INFO: (19) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:460/proxy/: tls baz (200; 3.926572ms) Jun 22 13:51:00.266: INFO: (19) /api/v1/namespaces/proxy-8721/services/https:proxy-service-ghhj4:tlsportname1/proxy/: tls baz (200; 4.040009ms) Jun 22 13:51:00.266: INFO: (19) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:162/proxy/: bar (200; 4.116174ms) Jun 22 13:51:00.266: INFO: (19) /api/v1/namespaces/proxy-8721/pods/https:proxy-service-ghhj4-lcmrh:462/proxy/: tls qux (200; 4.486858ms) Jun 22 13:51:00.266: INFO: (19) /api/v1/namespaces/proxy-8721/pods/http:proxy-service-ghhj4-lcmrh:1080/proxy/: ... (200; 4.242139ms) Jun 22 13:51:00.267: INFO: (19) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname1/proxy/: foo (200; 5.241491ms) Jun 22 13:51:00.267: INFO: (19) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname2/proxy/: bar (200; 5.130567ms) Jun 22 13:51:00.267: INFO: (19) /api/v1/namespaces/proxy-8721/services/proxy-service-ghhj4:portname1/proxy/: foo (200; 5.43928ms) Jun 22 13:51:00.267: INFO: (19) /api/v1/namespaces/proxy-8721/services/http:proxy-service-ghhj4:portname2/proxy/: bar (200; 5.367759ms) STEP: deleting ReplicationController proxy-service-ghhj4 in namespace proxy-8721, will wait for the garbage collector to delete the pods Jun 22 13:51:00.324: INFO: Deleting ReplicationController proxy-service-ghhj4 took: 5.719648ms Jun 22 13:51:00.625: INFO: Terminating ReplicationController proxy-service-ghhj4 pods took: 300.236665ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:51:04.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8721" for this suite. 
Jun 22 13:51:10.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:51:11.153: INFO: namespace proxy-8721 deletion completed in 6.210580841s • [SLOW TEST:20.398 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:51:11.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:51:11.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933" in namespace "downward-api-3349" to be "success or failure" Jun 22 13:51:11.399: INFO: Pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.845413ms Jun 22 13:51:13.518: INFO: Pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168558286s Jun 22 13:51:15.522: INFO: Pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172424532s Jun 22 13:51:17.525: INFO: Pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176266557s STEP: Saw pod success Jun 22 13:51:17.525: INFO: Pod "downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933" satisfied condition "success or failure" Jun 22 13:51:17.528: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933 container client-container: STEP: delete the pod Jun 22 13:51:17.559: INFO: Waiting for pod downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933 to disappear Jun 22 13:51:17.570: INFO: Pod downwardapi-volume-8be40a8c-399d-4493-a01d-aefdb953b933 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:51:17.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3349" for this suite. 
Jun 22 13:51:23.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:51:23.658: INFO: namespace downward-api-3349 deletion completed in 6.085407768s • [SLOW TEST:12.505 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:51:23.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-e94358a9-9ac3-4829-8e35-b43f16b23c66 STEP: Creating a pod to test consume secrets Jun 22 13:51:23.872: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f" in namespace "projected-7297" to be "success or failure" Jun 22 13:51:23.924: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.388606ms Jun 22 13:51:25.928: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056211121s Jun 22 13:51:27.933: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061052434s Jun 22 13:51:29.938: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f": Phase="Running", Reason="", readiness=true. Elapsed: 6.065602437s Jun 22 13:51:31.942: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069335001s STEP: Saw pod success Jun 22 13:51:31.942: INFO: Pod "pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f" satisfied condition "success or failure" Jun 22 13:51:31.944: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f container secret-volume-test: STEP: delete the pod Jun 22 13:51:32.019: INFO: Waiting for pod pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f to disappear Jun 22 13:51:32.055: INFO: Pod pod-projected-secrets-ce247a52-e15b-47bf-988d-5eab63ccfc8f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:51:32.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7297" for this suite. 
Jun 22 13:51:38.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:51:38.209: INFO: namespace projected-7297 deletion completed in 6.150657762s • [SLOW TEST:14.551 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:51:38.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 22 13:51:38.364: INFO: Waiting up to 5m0s for pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3" in namespace "emptydir-8441" to be "success or failure" Jun 22 13:51:38.392: INFO: Pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3": Phase="Pending", Reason="", readiness=false. Elapsed: 27.503152ms Jun 22 13:51:40.396: INFO: Pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032062664s Jun 22 13:51:42.401: INFO: Pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.036485786s Jun 22 13:51:44.405: INFO: Pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040899487s STEP: Saw pod success Jun 22 13:51:44.405: INFO: Pod "pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3" satisfied condition "success or failure" Jun 22 13:51:44.408: INFO: Trying to get logs from node iruya-worker2 pod pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3 container test-container: STEP: delete the pod Jun 22 13:51:44.676: INFO: Waiting for pod pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3 to disappear Jun 22 13:51:44.714: INFO: Pod pod-e0fe1ff8-da9b-41a0-9336-03a2908973e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:51:44.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8441" for this suite. Jun 22 13:51:50.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:51:50.891: INFO: namespace emptydir-8441 deletion completed in 6.172723743s • [SLOW TEST:12.682 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:51:50.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:51:50.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12" in namespace "downward-api-2194" to be "success or failure" Jun 22 13:51:51.038: INFO: Pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12": Phase="Pending", Reason="", readiness=false. Elapsed: 39.50336ms Jun 22 13:51:53.043: INFO: Pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044474686s Jun 22 13:51:55.047: INFO: Pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12": Phase="Running", Reason="", readiness=true. Elapsed: 4.048569964s Jun 22 13:51:57.052: INFO: Pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.053025422s STEP: Saw pod success Jun 22 13:51:57.052: INFO: Pod "downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12" satisfied condition "success or failure" Jun 22 13:51:57.054: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12 container client-container: STEP: delete the pod Jun 22 13:51:57.087: INFO: Waiting for pod downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12 to disappear Jun 22 13:51:57.110: INFO: Pod downwardapi-volume-ac2f7d62-93a1-40a0-8bc7-ede9c8d49b12 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:51:57.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2194" for this suite. Jun 22 13:52:03.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:52:03.227: INFO: namespace downward-api-2194 deletion completed in 6.113272732s • [SLOW TEST:12.335 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:52:03.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 22 13:52:03.367: INFO: Waiting up to 5m0s for pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e" in namespace "downward-api-3148" to be "success or failure" Jun 22 13:52:03.386: INFO: Pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.272425ms Jun 22 13:52:05.541: INFO: Pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174222826s Jun 22 13:52:07.631: INFO: Pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264024225s Jun 22 13:52:09.636: INFO: Pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268635087s STEP: Saw pod success Jun 22 13:52:09.636: INFO: Pod "downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e" satisfied condition "success or failure" Jun 22 13:52:09.639: INFO: Trying to get logs from node iruya-worker pod downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e container dapi-container: STEP: delete the pod Jun 22 13:52:09.813: INFO: Waiting for pod downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e to disappear Jun 22 13:52:09.906: INFO: Pod downward-api-03d4378f-5734-46a2-8eba-3fe6c87afa2e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:52:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3148" for this suite. 
Jun 22 13:52:15.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:52:16.062: INFO: namespace downward-api-3148 deletion completed in 6.150933236s • [SLOW TEST:12.834 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:52:16.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7947 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7947 STEP: Deleting pre-stop pod Jun 22 13:52:31.334: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:52:31.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7947" for this suite. Jun 22 13:53:15.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:53:15.540: INFO: namespace prestop-7947 deletion completed in 44.171224085s • [SLOW TEST:59.478 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:53:15.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 13:53:15.727: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.751695ms)
Jun 22 13:53:15.730: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.966428ms)
Jun 22 13:53:15.733: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.809662ms)
Jun 22 13:53:15.736: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.880754ms)
Jun 22 13:53:15.739: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.707925ms)
Jun 22 13:53:15.742: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.790614ms)
Jun 22 13:53:15.744: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.308779ms)
Jun 22 13:53:15.746: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.579564ms)
Jun 22 13:53:15.749: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.479935ms)
Jun 22 13:53:15.752: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.821366ms)
Jun 22 13:53:15.754: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.663727ms)
Jun 22 13:53:15.757: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.879359ms)
Jun 22 13:53:15.760: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.449285ms)
Jun 22 13:53:15.762: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.173473ms)
Jun 22 13:53:15.764: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.278569ms)
Jun 22 13:53:15.767: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.186813ms)
Jun 22 13:53:15.770: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.111377ms)
Jun 22 13:53:15.773: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.874734ms)
Jun 22 13:53:15.794: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 21.320503ms)
Jun 22 13:53:15.797: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.912317ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:53:15.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-874" for this suite. Jun 22 13:53:21.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:53:21.908: INFO: namespace proxy-874 deletion completed in 6.108386356s • [SLOW TEST:6.367 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:53:21.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 22 13:53:22.087: INFO: Waiting up to 5m0s for pod "pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0" in namespace "emptydir-2147" to be "success or failure" Jun 22 13:53:22.153: INFO: Pod
"pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0": Phase="Pending", Reason="", readiness=false. Elapsed: 66.340352ms Jun 22 13:53:24.195: INFO: Pod "pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107929769s Jun 22 13:53:26.231: INFO: Pod "pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144033812s Jun 22 13:53:28.237: INFO: Pod "pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15018284s STEP: Saw pod success Jun 22 13:53:28.237: INFO: Pod "pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0" satisfied condition "success or failure" Jun 22 13:53:28.240: INFO: Trying to get logs from node iruya-worker pod pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0 container test-container: STEP: delete the pod Jun 22 13:53:28.320: INFO: Waiting for pod pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0 to disappear Jun 22 13:53:28.423: INFO: Pod pod-8e195f1a-3b7e-40ba-ad0f-3eaf909bdcd0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:53:28.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2147" for this suite. 
Jun 22 13:53:34.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:53:34.521: INFO: namespace emptydir-2147 deletion completed in 6.093105448s • [SLOW TEST:12.612 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:53:34.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 22 13:53:41.220: INFO: Successfully updated pod "annotationupdate1cf672c7-550f-4611-b5a7-87b1d60f910e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:53:43.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6987" for this suite. 
Jun 22 13:54:07.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:54:07.437: INFO: namespace downward-api-6987 deletion completed in 24.163463526s • [SLOW TEST:32.916 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:54:07.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a29767ce-63f0-45b7-9d7f-c62f03d687a9 in namespace container-probe-37 Jun 22 13:54:13.715: INFO: Started pod busybox-a29767ce-63f0-45b7-9d7f-c62f03d687a9 in namespace container-probe-37 STEP: checking the pod's current state and verifying that restartCount is present Jun 22 13:54:13.717: INFO: Initial restart count of pod busybox-a29767ce-63f0-45b7-9d7f-c62f03d687a9 is 0 Jun 22 13:55:08.454: INFO: Restart count of 
pod container-probe-37/busybox-a29767ce-63f0-45b7-9d7f-c62f03d687a9 is now 1 (54.736923292s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:55:08.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-37" for this suite. Jun 22 13:55:14.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:55:14.732: INFO: namespace container-probe-37 deletion completed in 6.224156621s • [SLOW TEST:67.296 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:55:14.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: Running these 
commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 13:55:22.924: INFO: DNS probes using dns-test-1f168bb5-6218-4531-bfdf-64c6fe1652ca succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 13:55:31.052: INFO: File wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:31.054: INFO: File jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:31.054: INFO: Lookups using dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a failed for: [wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local] Jun 22 13:55:36.060: INFO: File wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 22 13:55:36.064: INFO: File jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:36.064: INFO: Lookups using dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a failed for: [wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local] Jun 22 13:55:41.059: INFO: File wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:41.063: INFO: File jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:41.063: INFO: Lookups using dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a failed for: [wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local] Jun 22 13:55:46.059: INFO: File wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:46.063: INFO: File jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 22 13:55:46.063: INFO: Lookups using dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a failed for: [wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local] Jun 22 13:55:51.062: INFO: File jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local from pod dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 22 13:55:51.062: INFO: Lookups using dns-3706/dns-test-35a279c0-5f51-4326-a367-c69485f30a5a failed for: [jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local] Jun 22 13:55:56.063: INFO: DNS probes using dns-test-35a279c0-5f51-4326-a367-c69485f30a5a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3706.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3706.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 22 13:56:05.075: INFO: DNS probes using dns-test-613a20e8-7a6f-4ff1-8c84-64ee20421127 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:56:05.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3706" for this suite. 
Jun 22 13:56:13.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:56:13.696: INFO: namespace dns-3706 deletion completed in 8.227944204s • [SLOW TEST:58.963 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:56:13.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 22 13:56:13.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6257' Jun 22 13:56:14.164: INFO: stderr: "" Jun 22 13:56:14.164: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods 
to come up. Jun 22 13:56:14.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6257' Jun 22 13:56:14.266: INFO: stderr: "" Jun 22 13:56:14.266: INFO: stdout: "update-demo-nautilus-9x97k update-demo-nautilus-ppn2m " Jun 22 13:56:14.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:14.374: INFO: stderr: "" Jun 22 13:56:14.374: INFO: stdout: "" Jun 22 13:56:14.374: INFO: update-demo-nautilus-9x97k is created but not running Jun 22 13:56:19.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6257' Jun 22 13:56:19.499: INFO: stderr: "" Jun 22 13:56:19.499: INFO: stdout: "update-demo-nautilus-9x97k update-demo-nautilus-ppn2m " Jun 22 13:56:19.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:19.581: INFO: stderr: "" Jun 22 13:56:19.581: INFO: stdout: "" Jun 22 13:56:19.581: INFO: update-demo-nautilus-9x97k is created but not running Jun 22 13:56:24.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6257' Jun 22 13:56:24.698: INFO: stderr: "" Jun 22 13:56:24.698: INFO: stdout: "update-demo-nautilus-9x97k update-demo-nautilus-ppn2m " Jun 22 13:56:24.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x97k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:24.852: INFO: stderr: "" Jun 22 13:56:24.852: INFO: stdout: "true" Jun 22 13:56:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x97k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:24.948: INFO: stderr: "" Jun 22 13:56:24.948: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 13:56:24.948: INFO: validating pod update-demo-nautilus-9x97k Jun 22 13:56:24.964: INFO: got data: { "image": "nautilus.jpg" } Jun 22 13:56:24.964: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 13:56:24.964: INFO: update-demo-nautilus-9x97k is verified up and running Jun 22 13:56:24.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ppn2m -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:25.047: INFO: stderr: "" Jun 22 13:56:25.047: INFO: stdout: "true" Jun 22 13:56:25.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ppn2m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:25.142: INFO: stderr: "" Jun 22 13:56:25.142: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 13:56:25.142: INFO: validating pod update-demo-nautilus-ppn2m Jun 22 13:56:25.196: INFO: got data: { "image": "nautilus.jpg" } Jun 22 13:56:25.196: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 13:56:25.196: INFO: update-demo-nautilus-ppn2m is verified up and running STEP: rolling-update to new replication controller Jun 22 13:56:25.283: INFO: scanned /root for discovery docs: Jun 22 13:56:25.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6257' Jun 22 13:56:50.482: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 22 13:56:50.482: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 22 13:56:50.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6257' Jun 22 13:56:50.633: INFO: stderr: "" Jun 22 13:56:50.633: INFO: stdout: "update-demo-kitten-4sr5g update-demo-kitten-gsmcn " Jun 22 13:56:50.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4sr5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:50.728: INFO: stderr: "" Jun 22 13:56:50.728: INFO: stdout: "true" Jun 22 13:56:50.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4sr5g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:50.825: INFO: stderr: "" Jun 22 13:56:50.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 13:56:50.825: INFO: validating pod update-demo-kitten-4sr5g Jun 22 13:56:50.890: INFO: got data: { "image": "kitten.jpg" } Jun 22 13:56:50.891: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 13:56:50.891: INFO: update-demo-kitten-4sr5g is verified up and running Jun 22 13:56:50.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gsmcn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:50.989: INFO: stderr: "" Jun 22 13:56:50.989: INFO: stdout: "true" Jun 22 13:56:50.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gsmcn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6257' Jun 22 13:56:51.071: INFO: stderr: "" Jun 22 13:56:51.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 22 13:56:51.071: INFO: validating pod update-demo-kitten-gsmcn Jun 22 13:56:51.101: INFO: got data: { "image": "kitten.jpg" } Jun 22 13:56:51.101: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 22 13:56:51.101: INFO: update-demo-kitten-gsmcn is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:56:51.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6257" for this suite. 
Jun 22 13:57:13.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:57:13.202: INFO: namespace kubectl-6257 deletion completed in 22.096543401s • [SLOW TEST:59.505 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:57:13.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 22 13:57:13.320: INFO: Waiting up to 5m0s for pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1" in namespace "emptydir-6053" to be "success or failure" Jun 22 13:57:13.341: INFO: Pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.856816ms Jun 22 13:57:15.345: INFO: Pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025389707s Jun 22 13:57:17.350: INFO: Pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030092832s Jun 22 13:57:19.354: INFO: Pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034462849s STEP: Saw pod success Jun 22 13:57:19.354: INFO: Pod "pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1" satisfied condition "success or failure" Jun 22 13:57:19.357: INFO: Trying to get logs from node iruya-worker pod pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1 container test-container: STEP: delete the pod Jun 22 13:57:19.379: INFO: Waiting for pod pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1 to disappear Jun 22 13:57:19.384: INFO: Pod pod-f5a1f4c5-b88c-4b29-af0e-e58c6ce835d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:57:19.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6053" for this suite. 
Jun 22 13:57:25.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:57:25.471: INFO: namespace emptydir-6053 deletion completed in 6.083353738s • [SLOW TEST:12.269 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:57:25.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:57:25.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500" in namespace "projected-2647" to be "success or failure" Jun 22 13:57:25.588: INFO: Pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.147214ms Jun 22 13:57:27.659: INFO: Pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093095062s Jun 22 13:57:29.663: INFO: Pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500": Phase="Running", Reason="", readiness=true. Elapsed: 4.096997882s Jun 22 13:57:31.668: INFO: Pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10151732s STEP: Saw pod success Jun 22 13:57:31.668: INFO: Pod "downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500" satisfied condition "success or failure" Jun 22 13:57:31.671: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500 container client-container: STEP: delete the pod Jun 22 13:57:31.691: INFO: Waiting for pod downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500 to disappear Jun 22 13:57:31.696: INFO: Pod downwardapi-volume-487ae743-0acc-42d9-b188-9f343bda6500 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:57:31.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2647" for this suite. 
Jun 22 13:57:37.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:57:37.780: INFO: namespace projected-2647 deletion completed in 6.080528811s • [SLOW TEST:12.308 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:57:37.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: 
Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:58:07.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9521" for this suite. Jun 22 13:58:13.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:58:13.544: INFO: namespace container-runtime-9521 deletion completed in 6.085827317s • [SLOW TEST:35.765 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:58:13.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 13:58:13.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768" in namespace "projected-4690" to be "success or failure" Jun 22 13:58:13.666: INFO: Pod "downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768": Phase="Pending", Reason="", readiness=false. Elapsed: 35.843913ms Jun 22 13:58:15.671: INFO: Pod "downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040270872s Jun 22 13:58:17.675: INFO: Pod "downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044663215s STEP: Saw pod success Jun 22 13:58:17.675: INFO: Pod "downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768" satisfied condition "success or failure" Jun 22 13:58:17.679: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768 container client-container: STEP: delete the pod Jun 22 13:58:17.740: INFO: Waiting for pod downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768 to disappear Jun 22 13:58:17.753: INFO: Pod downwardapi-volume-6440e63a-c36f-4dbb-ab4a-34ea1dd16768 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:58:17.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4690" for this suite. Jun 22 13:58:23.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:58:23.844: INFO: namespace projected-4690 deletion completed in 6.086473772s • [SLOW TEST:10.299 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:58:23.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-36171250-4584-4220-b5f4-24b9bb8b589c STEP: Creating a pod to test consume secrets Jun 22 13:58:23.960: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6" in namespace "projected-7536" to be "success or failure" Jun 22 13:58:23.963: INFO: Pod "pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269192ms Jun 22 13:58:25.970: INFO: Pod "pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010373671s Jun 22 13:58:28.206: INFO: Pod "pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246248895s STEP: Saw pod success Jun 22 13:58:28.206: INFO: Pod "pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6" satisfied condition "success or failure" Jun 22 13:58:28.209: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6 container projected-secret-volume-test: STEP: delete the pod Jun 22 13:58:28.239: INFO: Waiting for pod pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6 to disappear Jun 22 13:58:28.254: INFO: Pod pod-projected-secrets-0ad05b45-e74f-4670-9343-8a8aff11e5d6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:58:28.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7536" for this suite. 
Jun 22 13:58:34.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:58:34.358: INFO: namespace projected-7536 deletion completed in 6.100797567s • [SLOW TEST:10.514 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:58:34.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 22 13:58:34.399: INFO: Waiting up to 5m0s for pod "downward-api-00810ea2-cd67-4a41-8488-eac198315434" in namespace "downward-api-5712" to be "success or failure" Jun 22 13:58:34.414: INFO: Pod "downward-api-00810ea2-cd67-4a41-8488-eac198315434": Phase="Pending", Reason="", readiness=false. Elapsed: 14.935912ms Jun 22 13:58:36.419: INFO: Pod "downward-api-00810ea2-cd67-4a41-8488-eac198315434": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019575853s Jun 22 13:58:38.423: INFO: Pod "downward-api-00810ea2-cd67-4a41-8488-eac198315434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024036222s STEP: Saw pod success Jun 22 13:58:38.423: INFO: Pod "downward-api-00810ea2-cd67-4a41-8488-eac198315434" satisfied condition "success or failure" Jun 22 13:58:38.426: INFO: Trying to get logs from node iruya-worker2 pod downward-api-00810ea2-cd67-4a41-8488-eac198315434 container dapi-container: STEP: delete the pod Jun 22 13:58:38.447: INFO: Waiting for pod downward-api-00810ea2-cd67-4a41-8488-eac198315434 to disappear Jun 22 13:58:38.451: INFO: Pod downward-api-00810ea2-cd67-4a41-8488-eac198315434 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:58:38.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5712" for this suite. Jun 22 13:58:44.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:58:44.548: INFO: namespace downward-api-5712 deletion completed in 6.092375165s • [SLOW TEST:10.189 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jun 22 13:58:44.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 22 13:58:44.647: INFO: Waiting up to 5m0s for pod "pod-686eaebf-562a-490c-9aa1-b70ef777a3aa" in namespace "emptydir-8729" to be "success or failure" Jun 22 13:58:44.650: INFO: Pod "pod-686eaebf-562a-490c-9aa1-b70ef777a3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.463151ms Jun 22 13:58:46.655: INFO: Pod "pod-686eaebf-562a-490c-9aa1-b70ef777a3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007759947s Jun 22 13:58:48.659: INFO: Pod "pod-686eaebf-562a-490c-9aa1-b70ef777a3aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011801251s STEP: Saw pod success Jun 22 13:58:48.659: INFO: Pod "pod-686eaebf-562a-490c-9aa1-b70ef777a3aa" satisfied condition "success or failure" Jun 22 13:58:48.661: INFO: Trying to get logs from node iruya-worker pod pod-686eaebf-562a-490c-9aa1-b70ef777a3aa container test-container: STEP: delete the pod Jun 22 13:58:48.734: INFO: Waiting for pod pod-686eaebf-562a-490c-9aa1-b70ef777a3aa to disappear Jun 22 13:58:48.739: INFO: Pod pod-686eaebf-562a-490c-9aa1-b70ef777a3aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:58:48.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8729" for this suite. 
Jun 22 13:58:54.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:58:54.842: INFO: namespace emptydir-8729 deletion completed in 6.099225504s • [SLOW TEST:10.293 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:58:54.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 22 13:59:02.068: INFO: 0 pods remaining Jun 22 13:59:02.068: INFO: 0 pods has nil DeletionTimestamp Jun 22 13:59:02.068: INFO: Jun 22 13:59:03.243: INFO: 0 pods remaining Jun 22 13:59:03.243: INFO: 0 pods has nil DeletionTimestamp Jun 22 13:59:03.243: INFO: STEP: Gathering metrics W0622 13:59:04.303951 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 22 13:59:04.304: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:59:04.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3057" for this suite. 
Jun 22 13:59:10.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:59:10.685: INFO: namespace gc-3057 deletion completed in 6.377884029s • [SLOW TEST:15.843 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:59:10.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4e3bda02-3445-4454-abb0-2b145fe1543f STEP: Creating a pod to test consume secrets Jun 22 13:59:10.787: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8" in namespace "projected-6816" to be "success or failure" Jun 22 13:59:10.800: INFO: Pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.743327ms Jun 22 13:59:12.852: INFO: Pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064458081s Jun 22 13:59:14.855: INFO: Pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8": Phase="Running", Reason="", readiness=true. Elapsed: 4.067771795s Jun 22 13:59:16.859: INFO: Pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072123128s STEP: Saw pod success Jun 22 13:59:16.859: INFO: Pod "pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8" satisfied condition "success or failure" Jun 22 13:59:16.862: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8 container projected-secret-volume-test: STEP: delete the pod Jun 22 13:59:16.895: INFO: Waiting for pod pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8 to disappear Jun 22 13:59:16.911: INFO: Pod pod-projected-secrets-f1dc530b-01d3-40bc-9ba3-d090b6d14fe8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:59:16.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6816" for this suite. 
Jun 22 13:59:22.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 13:59:23.013: INFO: namespace projected-6816 deletion completed in 6.097992509s • [SLOW TEST:12.328 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 13:59:23.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9511 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 13:59:23.066: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 13:59:49.188: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.202:8080/dial?request=hostName&protocol=udp&host=10.244.2.201&port=8081&tries=1'] Namespace:pod-network-test-9511 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jun 22 13:59:49.188: INFO: >>> kubeConfig: /root/.kube/config I0622 13:59:49.223489 7 log.go:172] (0xc003098580) (0xc00222db80) Create stream I0622 13:59:49.223532 7 log.go:172] (0xc003098580) (0xc00222db80) Stream added, broadcasting: 1 I0622 13:59:49.225457 7 log.go:172] (0xc003098580) Reply frame received for 1 I0622 13:59:49.225513 7 log.go:172] (0xc003098580) (0xc0022dabe0) Create stream I0622 13:59:49.225525 7 log.go:172] (0xc003098580) (0xc0022dabe0) Stream added, broadcasting: 3 I0622 13:59:49.226366 7 log.go:172] (0xc003098580) Reply frame received for 3 I0622 13:59:49.226406 7 log.go:172] (0xc003098580) (0xc0022dac80) Create stream I0622 13:59:49.226417 7 log.go:172] (0xc003098580) (0xc0022dac80) Stream added, broadcasting: 5 I0622 13:59:49.227283 7 log.go:172] (0xc003098580) Reply frame received for 5 I0622 13:59:49.382157 7 log.go:172] (0xc003098580) Data frame received for 3 I0622 13:59:49.382200 7 log.go:172] (0xc0022dabe0) (3) Data frame handling I0622 13:59:49.382218 7 log.go:172] (0xc0022dabe0) (3) Data frame sent I0622 13:59:49.382231 7 log.go:172] (0xc003098580) Data frame received for 3 I0622 13:59:49.382241 7 log.go:172] (0xc0022dabe0) (3) Data frame handling I0622 13:59:49.382515 7 log.go:172] (0xc003098580) Data frame received for 5 I0622 13:59:49.382549 7 log.go:172] (0xc0022dac80) (5) Data frame handling I0622 13:59:49.383821 7 log.go:172] (0xc003098580) Data frame received for 1 I0622 13:59:49.383846 7 log.go:172] (0xc00222db80) (1) Data frame handling I0622 13:59:49.383903 7 log.go:172] (0xc00222db80) (1) Data frame sent I0622 13:59:49.383931 7 log.go:172] (0xc003098580) (0xc00222db80) Stream removed, broadcasting: 1 I0622 13:59:49.384021 7 log.go:172] (0xc003098580) Go away received I0622 13:59:49.384066 7 log.go:172] (0xc003098580) (0xc00222db80) Stream removed, broadcasting: 1 I0622 13:59:49.384093 7 log.go:172] (0xc003098580) (0xc0022dabe0) Stream removed, broadcasting: 3 I0622 13:59:49.384100 7 log.go:172] 
(0xc003098580) (0xc0022dac80) Stream removed, broadcasting: 5 Jun 22 13:59:49.384: INFO: Waiting for endpoints: map[] Jun 22 13:59:49.387: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.202:8080/dial?request=hostName&protocol=udp&host=10.244.1.115&port=8081&tries=1'] Namespace:pod-network-test-9511 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 13:59:49.387: INFO: >>> kubeConfig: /root/.kube/config I0622 13:59:49.413940 7 log.go:172] (0xc0032be9a0) (0xc000e9ba40) Create stream I0622 13:59:49.413996 7 log.go:172] (0xc0032be9a0) (0xc000e9ba40) Stream added, broadcasting: 1 I0622 13:59:49.416536 7 log.go:172] (0xc0032be9a0) Reply frame received for 1 I0622 13:59:49.416582 7 log.go:172] (0xc0032be9a0) (0xc00222de00) Create stream I0622 13:59:49.416599 7 log.go:172] (0xc0032be9a0) (0xc00222de00) Stream added, broadcasting: 3 I0622 13:59:49.417873 7 log.go:172] (0xc0032be9a0) Reply frame received for 3 I0622 13:59:49.417919 7 log.go:172] (0xc0032be9a0) (0xc00222dea0) Create stream I0622 13:59:49.417934 7 log.go:172] (0xc0032be9a0) (0xc00222dea0) Stream added, broadcasting: 5 I0622 13:59:49.418917 7 log.go:172] (0xc0032be9a0) Reply frame received for 5 I0622 13:59:49.497546 7 log.go:172] (0xc0032be9a0) Data frame received for 3 I0622 13:59:49.497579 7 log.go:172] (0xc00222de00) (3) Data frame handling I0622 13:59:49.497604 7 log.go:172] (0xc00222de00) (3) Data frame sent I0622 13:59:49.498263 7 log.go:172] (0xc0032be9a0) Data frame received for 5 I0622 13:59:49.498324 7 log.go:172] (0xc00222dea0) (5) Data frame handling I0622 13:59:49.498356 7 log.go:172] (0xc0032be9a0) Data frame received for 3 I0622 13:59:49.498378 7 log.go:172] (0xc00222de00) (3) Data frame handling I0622 13:59:49.499899 7 log.go:172] (0xc0032be9a0) Data frame received for 1 I0622 13:59:49.499940 7 log.go:172] (0xc000e9ba40) (1) Data frame handling I0622 13:59:49.499956 7 log.go:172] 
(0xc000e9ba40) (1) Data frame sent I0622 13:59:49.499970 7 log.go:172] (0xc0032be9a0) (0xc000e9ba40) Stream removed, broadcasting: 1 I0622 13:59:49.499982 7 log.go:172] (0xc0032be9a0) Go away received I0622 13:59:49.500124 7 log.go:172] (0xc0032be9a0) (0xc000e9ba40) Stream removed, broadcasting: 1 I0622 13:59:49.500145 7 log.go:172] (0xc0032be9a0) (0xc00222de00) Stream removed, broadcasting: 3 I0622 13:59:49.500160 7 log.go:172] (0xc0032be9a0) (0xc00222dea0) Stream removed, broadcasting: 5 Jun 22 13:59:49.500: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 13:59:49.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9511" for this suite. Jun 22 14:00:13.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:00:13.599: INFO: namespace pod-network-test-9511 deletion completed in 24.095628955s • [SLOW TEST:50.586 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:00:13.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:00:13.722: INFO: Create a RollingUpdate DaemonSet Jun 22 14:00:13.725: INFO: Check that daemon pods launch on every node of the cluster Jun 22 14:00:13.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:13.732: INFO: Number of nodes with available pods: 0 Jun 22 14:00:13.732: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:00:14.736: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:14.739: INFO: Number of nodes with available pods: 0 Jun 22 14:00:14.739: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:00:15.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:15.971: INFO: Number of nodes with available pods: 0 Jun 22 14:00:15.971: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:00:16.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:16.742: INFO: Number of nodes with available pods: 0 Jun 22 14:00:16.742: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:00:17.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:17.741: INFO: Number of nodes with available pods: 0 Jun 22 14:00:17.741: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:00:18.737: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:18.740: INFO: Number of nodes with available pods: 2 Jun 22 14:00:18.740: INFO: Number of running nodes: 2, number of available pods: 2 Jun 22 14:00:18.740: INFO: Update the DaemonSet to trigger a rollout Jun 22 14:00:18.746: INFO: Updating DaemonSet daemon-set Jun 22 14:00:32.776: INFO: Roll back the DaemonSet before rollout is complete Jun 22 14:00:32.782: INFO: Updating DaemonSet daemon-set Jun 22 14:00:32.782: INFO: Make sure DaemonSet rollback is complete Jun 22 14:00:32.789: INFO: Wrong image for pod: daemon-set-j5r9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 22 14:00:32.790: INFO: Pod daemon-set-j5r9d is not available Jun 22 14:00:32.796: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:33.801: INFO: Wrong image for pod: daemon-set-j5r9d. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Jun 22 14:00:33.801: INFO: Pod daemon-set-j5r9d is not available Jun 22 14:00:33.805: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:00:34.824: INFO: Pod daemon-set-9dv6s is not available Jun 22 14:00:34.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1776, will wait for the garbage collector to delete the pods Jun 22 14:00:34.897: INFO: Deleting DaemonSet.extensions daemon-set took: 7.099262ms Jun 22 14:00:35.198: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233431ms Jun 22 14:00:42.201: INFO: Number of nodes with available pods: 0 Jun 22 14:00:42.201: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 14:00:42.203: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1776/daemonsets","resourceVersion":"17864523"},"items":null} Jun 22 14:00:42.205: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1776/pods","resourceVersion":"17864523"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:00:42.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1776" for this suite. 
Jun 22 14:00:48.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:00:48.303: INFO: namespace daemonsets-1776 deletion completed in 6.088761188s • [SLOW TEST:34.703 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:00:48.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 22 14:00:48.443: INFO: Waiting up to 5m0s for pod "pod-731e5aeb-442a-4274-96b0-073cfba026ec" in namespace "emptydir-6647" to be "success or failure" Jun 22 14:00:48.475: INFO: Pod "pod-731e5aeb-442a-4274-96b0-073cfba026ec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.399714ms Jun 22 14:00:50.537: INFO: Pod "pod-731e5aeb-442a-4274-96b0-073cfba026ec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.094281134s Jun 22 14:00:52.541: INFO: Pod "pod-731e5aeb-442a-4274-96b0-073cfba026ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098407337s STEP: Saw pod success Jun 22 14:00:52.541: INFO: Pod "pod-731e5aeb-442a-4274-96b0-073cfba026ec" satisfied condition "success or failure" Jun 22 14:00:52.544: INFO: Trying to get logs from node iruya-worker pod pod-731e5aeb-442a-4274-96b0-073cfba026ec container test-container: STEP: delete the pod Jun 22 14:00:52.562: INFO: Waiting for pod pod-731e5aeb-442a-4274-96b0-073cfba026ec to disappear Jun 22 14:00:52.566: INFO: Pod pod-731e5aeb-442a-4274-96b0-073cfba026ec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:00:52.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6647" for this suite. Jun 22 14:00:58.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:00:58.659: INFO: namespace emptydir-6647 deletion completed in 6.089836505s • [SLOW TEST:10.356 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 
14:00:58.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 14:00:58.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9" in namespace "downward-api-2980" to be "success or failure" Jun 22 14:00:58.754: INFO: Pod "downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036813ms Jun 22 14:01:00.758: INFO: Pod "downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008211198s Jun 22 14:01:02.762: INFO: Pod "downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012595436s STEP: Saw pod success Jun 22 14:01:02.762: INFO: Pod "downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9" satisfied condition "success or failure" Jun 22 14:01:02.765: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9 container client-container: STEP: delete the pod Jun 22 14:01:02.796: INFO: Waiting for pod downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9 to disappear Jun 22 14:01:02.802: INFO: Pod downwardapi-volume-fd372657-398c-430f-9081-f9644d9c89c9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:01:02.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2980" for this suite. Jun 22 14:01:08.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:01:08.930: INFO: namespace downward-api-2980 deletion completed in 6.125547398s • [SLOW TEST:10.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:01:08.930: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:01:35.030: INFO: Container started at 2020-06-22 14:01:11 +0000 UTC, pod became ready at 2020-06-22 14:01:33 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:01:35.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6536" for this suite. Jun 22 14:01:57.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:01:57.126: INFO: namespace container-probe-6536 deletion completed in 22.091103588s • [SLOW TEST:48.195 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jun 22 14:01:57.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 22 14:01:57.206: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864771,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 14:01:57.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864772,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} Jun 22 14:01:57.207: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864773,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 22 14:02:07.235: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864796,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 14:02:07.236: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864797,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 22 14:02:07.236: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5710,SelfLink:/api/v1/namespaces/watch-5710/configmaps/e2e-watch-test-label-changed,UID:0ab9d58f-424d-4b86-9bd9-d37910c13dc2,ResourceVersion:17864798,Generation:0,CreationTimestamp:2020-06-22 14:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:02:07.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5710" for this suite. 
Jun 22 14:02:13.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:02:13.335: INFO: namespace watch-5710 deletion completed in 6.095241588s • [SLOW TEST:16.209 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:02:13.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:02:13.431: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 22 14:02:13.444: INFO: Number of nodes with available pods: 0 Jun 22 14:02:13.444: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 22 14:02:13.498: INFO: Number of nodes with available pods: 0 Jun 22 14:02:13.498: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:14.502: INFO: Number of nodes with available pods: 0 Jun 22 14:02:14.502: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:15.502: INFO: Number of nodes with available pods: 0 Jun 22 14:02:15.502: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:16.503: INFO: Number of nodes with available pods: 0 Jun 22 14:02:16.503: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:17.502: INFO: Number of nodes with available pods: 1 Jun 22 14:02:17.502: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 22 14:02:17.534: INFO: Number of nodes with available pods: 1 Jun 22 14:02:17.534: INFO: Number of running nodes: 0, number of available pods: 1 Jun 22 14:02:18.539: INFO: Number of nodes with available pods: 0 Jun 22 14:02:18.539: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 22 14:02:18.575: INFO: Number of nodes with available pods: 0 Jun 22 14:02:18.575: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:19.731: INFO: Number of nodes with available pods: 0 Jun 22 14:02:19.731: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:20.579: INFO: Number of nodes with available pods: 0 Jun 22 14:02:20.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:21.579: INFO: Number of nodes with available pods: 0 Jun 22 14:02:21.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:22.580: INFO: Number of nodes with available pods: 0 Jun 22 14:02:22.580: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:23.579: INFO: Number of nodes with available 
pods: 0 Jun 22 14:02:23.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:24.580: INFO: Number of nodes with available pods: 0 Jun 22 14:02:24.580: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:25.580: INFO: Number of nodes with available pods: 0 Jun 22 14:02:25.580: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:26.579: INFO: Number of nodes with available pods: 0 Jun 22 14:02:26.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:27.582: INFO: Number of nodes with available pods: 0 Jun 22 14:02:27.582: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:28.579: INFO: Number of nodes with available pods: 0 Jun 22 14:02:28.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:29.579: INFO: Number of nodes with available pods: 0 Jun 22 14:02:29.579: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:30.585: INFO: Number of nodes with available pods: 0 Jun 22 14:02:30.585: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:31.586: INFO: Number of nodes with available pods: 0 Jun 22 14:02:31.586: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:32.580: INFO: Number of nodes with available pods: 0 Jun 22 14:02:32.580: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:33.578: INFO: Number of nodes with available pods: 0 Jun 22 14:02:33.578: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:34.580: INFO: Number of nodes with available pods: 0 Jun 22 14:02:34.580: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:02:35.580: INFO: Number of nodes with available pods: 1 Jun 22 14:02:35.580: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: 
Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9741, will wait for the garbage collector to delete the pods Jun 22 14:02:35.645: INFO: Deleting DaemonSet.extensions daemon-set took: 6.559552ms Jun 22 14:02:35.945: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.320429ms Jun 22 14:02:40.350: INFO: Number of nodes with available pods: 0 Jun 22 14:02:40.350: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 14:02:40.352: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9741/daemonsets","resourceVersion":"17864918"},"items":null} Jun 22 14:02:40.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9741/pods","resourceVersion":"17864918"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:02:40.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9741" for this suite. 
Jun 22 14:02:46.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:02:46.472: INFO: namespace daemonsets-9741 deletion completed in 6.090887791s • [SLOW TEST:33.136 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:02:46.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 22 14:02:54.639: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:02:54.648: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:02:56.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:02:56.652: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:02:58.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:02:58.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:00.648: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:00.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:02.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:02.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:04.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:04.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:06.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:06.654: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:08.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:08.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:10.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:10.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:12.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:12.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:14.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:14.652: INFO: Pod 
pod-with-poststart-exec-hook still exists Jun 22 14:03:16.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:16.652: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:18.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:18.665: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:20.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:20.653: INFO: Pod pod-with-poststart-exec-hook still exists Jun 22 14:03:22.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 22 14:03:22.653: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:03:22.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8082" for this suite. Jun 22 14:03:44.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:03:44.787: INFO: namespace container-lifecycle-hook-8082 deletion completed in 22.13036675s • [SLOW TEST:58.314 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:03:44.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4503
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 22 14:03:44.824: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 22 14:04:10.942: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.211 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4503 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 22 14:04:10.942: INFO: >>> kubeConfig: /root/.kube/config
I0622 14:04:10.975971 7 log.go:172] (0xc0025aa580) (0xc001c90500) Create stream
I0622 14:04:10.975997 7 log.go:172] (0xc0025aa580) (0xc001c90500) Stream added, broadcasting: 1
I0622 14:04:10.978903 7 log.go:172] (0xc0025aa580) Reply frame received for 1
I0622 14:04:10.978947 7 log.go:172] (0xc0025aa580) (0xc0003945a0) Create stream
I0622 14:04:10.978963 7 log.go:172] (0xc0025aa580) (0xc0003945a0) Stream added, broadcasting: 3
I0622 14:04:10.979915 7 log.go:172] (0xc0025aa580) Reply frame received for 3
I0622 14:04:10.979948 7 log.go:172] (0xc0025aa580) (0xc0012a7c20) Create stream
I0622 14:04:10.979957 7 log.go:172] (0xc0025aa580) (0xc0012a7c20) Stream added, broadcasting: 5
I0622 14:04:10.980936 7 log.go:172] (0xc0025aa580) Reply frame received for 5
I0622 14:04:12.059253 7 log.go:172] (0xc0025aa580) Data frame received for 3
I0622 14:04:12.059290 7 log.go:172] (0xc0003945a0) (3) Data frame handling
I0622 14:04:12.059312 7 log.go:172] (0xc0003945a0) (3) Data frame sent
I0622 14:04:12.059323 7 log.go:172] (0xc0025aa580) Data frame received for 3
I0622 14:04:12.059360 7 log.go:172] (0xc0003945a0) (3) Data frame handling
I0622 14:04:12.059648 7 log.go:172] (0xc0025aa580) Data frame received for 5
I0622 14:04:12.059671 7 log.go:172] (0xc0012a7c20) (5) Data frame handling
I0622 14:04:12.062053 7 log.go:172] (0xc0025aa580) Data frame received for 1
I0622 14:04:12.062086 7 log.go:172] (0xc001c90500) (1) Data frame handling
I0622 14:04:12.062117 7 log.go:172] (0xc001c90500) (1) Data frame sent
I0622 14:04:12.062195 7 log.go:172] (0xc0025aa580) (0xc001c90500) Stream removed, broadcasting: 1
I0622 14:04:12.062240 7 log.go:172] (0xc0025aa580) Go away received
I0622 14:04:12.062357 7 log.go:172] (0xc0025aa580) (0xc001c90500) Stream removed, broadcasting: 1
I0622 14:04:12.062402 7 log.go:172] (0xc0025aa580) (0xc0003945a0) Stream removed, broadcasting: 3
I0622 14:04:12.062427 7 log.go:172] (0xc0025aa580) (0xc0012a7c20) Stream removed, broadcasting: 5
Jun 22 14:04:12.062: INFO: Found all expected endpoints: [netserver-0]
Jun 22 14:04:12.066: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.119 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4503 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 22 14:04:12.066: INFO: >>> kubeConfig: /root/.kube/config
I0622 14:04:12.094673 7 log.go:172] (0xc0032ba0b0) (0xc000d7eb40) Create stream
I0622 14:04:12.094709 7 log.go:172] (0xc0032ba0b0) (0xc000d7eb40) Stream added, broadcasting: 1
I0622 14:04:12.096322 7 log.go:172] (0xc0032ba0b0) Reply frame received for 1
I0622 14:04:12.096364 7 log.go:172] (0xc0032ba0b0) (0xc000d7ec80) Create stream
I0622 14:04:12.096382 7 log.go:172] (0xc0032ba0b0) (0xc000d7ec80) Stream added, broadcasting: 3
I0622 14:04:12.097544 7 log.go:172] (0xc0032ba0b0) Reply frame received for 3
I0622 14:04:12.097569 7 log.go:172] (0xc0032ba0b0) (0xc001c905a0) Create stream
I0622 14:04:12.097597 7 log.go:172] (0xc0032ba0b0) (0xc001c905a0) Stream added, broadcasting: 5
I0622 14:04:12.098530 7 log.go:172] (0xc0032ba0b0) Reply frame received for 5
I0622 14:04:13.184659 7 log.go:172] (0xc0032ba0b0) Data frame received for 3
I0622 14:04:13.184696 7 log.go:172] (0xc000d7ec80) (3) Data frame handling
I0622 14:04:13.184715 7 log.go:172] (0xc000d7ec80) (3) Data frame sent
I0622 14:04:13.184957 7 log.go:172] (0xc0032ba0b0) Data frame received for 3
I0622 14:04:13.184983 7 log.go:172] (0xc000d7ec80) (3) Data frame handling
I0622 14:04:13.185037 7 log.go:172] (0xc0032ba0b0) Data frame received for 5
I0622 14:04:13.185056 7 log.go:172] (0xc001c905a0) (5) Data frame handling
I0622 14:04:13.186662 7 log.go:172] (0xc0032ba0b0) Data frame received for 1
I0622 14:04:13.186684 7 log.go:172] (0xc000d7eb40) (1) Data frame handling
I0622 14:04:13.186703 7 log.go:172] (0xc000d7eb40) (1) Data frame sent
I0622 14:04:13.186719 7 log.go:172] (0xc0032ba0b0) (0xc000d7eb40) Stream removed, broadcasting: 1
I0622 14:04:13.186731 7 log.go:172] (0xc0032ba0b0) Go away received
I0622 14:04:13.186942 7 log.go:172] (0xc0032ba0b0) (0xc000d7eb40) Stream removed, broadcasting: 1
I0622 14:04:13.186980 7 log.go:172] (0xc0032ba0b0) (0xc000d7ec80) Stream removed, broadcasting: 3
I0622 14:04:13.186999 7 log.go:172] (0xc0032ba0b0) (0xc001c905a0) Stream removed, broadcasting: 5
Jun 22 14:04:13.187: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:04:13.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4503" for this suite.
Jun 22 14:04:37.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:04:37.294: INFO: namespace pod-network-test-4503 deletion completed in 24.10350326s
• [SLOW TEST:52.507 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:04:37.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jun 22 14:04:41.917: INFO: Successfully updated pod "labelsupdatee5ae0f3c-34b7-4a87-86e1-5c5d93ff1f45"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:04:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-269" for this suite.
Jun 22 14:05:05.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:05:06.032: INFO: namespace downward-api-269 deletion completed in 22.090827956s
• [SLOW TEST:28.738 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:05:06.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 in namespace container-probe-4817
Jun 22 14:05:10.129: INFO: Started pod liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 in namespace container-probe-4817
STEP: checking the pod's current state and verifying that restartCount is present
Jun 22 14:05:10.132: INFO: Initial restart count of pod liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is 0
Jun 22 14:05:24.166: INFO: Restart count of pod container-probe-4817/liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is now 1 (14.033665437s elapsed)
Jun 22 14:05:44.212: INFO: Restart count of pod container-probe-4817/liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is now 2 (34.079314777s elapsed)
Jun 22 14:06:04.259: INFO: Restart count of pod container-probe-4817/liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is now 3 (54.126454116s elapsed)
Jun 22 14:06:24.302: INFO: Restart count of pod container-probe-4817/liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is now 4 (1m14.169513112s elapsed)
Jun 22 14:07:24.428: INFO: Restart count of pod container-probe-4817/liveness-a43baa3f-9a16-4f62-b91c-475669c54ff7 is now 5 (2m14.295664585s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:07:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4817" for this suite.
Jun 22 14:07:30.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:07:30.545: INFO: namespace container-probe-4817 deletion completed in 6.102057399s
• [SLOW TEST:144.513 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:07:30.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 22 14:07:30.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-875'
Jun 22 14:07:33.186: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 22 14:07:33.186: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jun 22 14:07:33.228: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2nrts]
Jun 22 14:07:33.228: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2nrts" in namespace "kubectl-875" to be "running and ready"
Jun 22 14:07:33.230: INFO: Pod "e2e-test-nginx-rc-2nrts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364511ms
Jun 22 14:07:35.234: INFO: Pod "e2e-test-nginx-rc-2nrts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006225445s
Jun 22 14:07:37.239: INFO: Pod "e2e-test-nginx-rc-2nrts": Phase="Running", Reason="", readiness=true. Elapsed: 4.011035243s
Jun 22 14:07:37.239: INFO: Pod "e2e-test-nginx-rc-2nrts" satisfied condition "running and ready"
Jun 22 14:07:37.239: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2nrts]
Jun 22 14:07:37.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-875'
Jun 22 14:07:37.407: INFO: stderr: ""
Jun 22 14:07:37.407: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jun 22 14:07:37.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-875'
Jun 22 14:07:37.508: INFO: stderr: ""
Jun 22 14:07:37.508: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:07:37.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-875" for this suite.
Jun 22 14:07:59.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:07:59.597: INFO: namespace kubectl-875 deletion completed in 22.085885853s
• [SLOW TEST:29.051 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:07:59.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 22 14:07:59.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4590'
Jun 22 14:07:59.768: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 22 14:07:59.768: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jun 22 14:08:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4590'
Jun 22 14:08:01.897: INFO: stderr: ""
Jun 22 14:08:01.897: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:08:01.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4590" for this suite.
Jun 22 14:08:21.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:08:22.047: INFO: namespace kubectl-4590 deletion completed in 20.136462967s
• [SLOW TEST:22.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:08:22.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3651
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jun 22 14:08:22.133: INFO: Found 0 stateful pods, waiting for 3
Jun 22 14:08:32.145: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 14:08:32.145: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 14:08:32.145: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jun 22 14:08:42.139: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 14:08:42.139: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 14:08:42.139: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jun 22 14:08:42.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3651 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 22 14:08:42.421: INFO: stderr: "I0622 14:08:42.297047 1837 log.go:172] (0xc000820370) (0xc0004dc6e0) Create stream\nI0622 14:08:42.297103 1837 log.go:172] (0xc000820370) (0xc0004dc6e0) Stream added, broadcasting: 1\nI0622 14:08:42.300704 1837 log.go:172] (0xc000820370) Reply frame received for 1\nI0622 14:08:42.300742 1837 log.go:172] (0xc000820370) (0xc0004dc000) Create stream\nI0622 14:08:42.300757 1837 log.go:172] (0xc000820370) (0xc0004dc000) Stream added, broadcasting: 3\nI0622 14:08:42.301970 1837 log.go:172] (0xc000820370) Reply frame received for 3\nI0622 14:08:42.302015 1837 log.go:172] (0xc000820370) (0xc0006b4140) Create stream\nI0622 14:08:42.302030 1837 log.go:172] (0xc000820370) (0xc0006b4140) Stream added, broadcasting: 5\nI0622 14:08:42.302995 1837 log.go:172] (0xc000820370) Reply frame received for 5\nI0622 14:08:42.384167 1837 log.go:172] (0xc000820370) Data frame received for 5\nI0622 14:08:42.384213 1837 log.go:172] (0xc0006b4140) (5) Data frame handling\nI0622 14:08:42.384253 1837 log.go:172] (0xc0006b4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 14:08:42.411957 1837 log.go:172] (0xc000820370) Data frame received for 3\nI0622 14:08:42.412010 1837 log.go:172] (0xc0004dc000) (3) Data frame handling\nI0622 14:08:42.412039 1837 log.go:172] (0xc0004dc000) (3) Data frame sent\nI0622 14:08:42.412172 1837 log.go:172] (0xc000820370) Data frame received for 3\nI0622 14:08:42.412206 1837 log.go:172] (0xc0004dc000) (3) Data frame handling\nI0622 14:08:42.412545 1837 log.go:172] (0xc000820370) Data frame received for 5\nI0622 14:08:42.412572 1837 log.go:172] (0xc0006b4140) (5) Data frame handling\nI0622 14:08:42.414834 1837 log.go:172] (0xc000820370) Data frame received for 1\nI0622 14:08:42.414875 1837 log.go:172] (0xc0004dc6e0) (1) Data frame handling\nI0622 14:08:42.414911 1837 log.go:172] (0xc0004dc6e0) (1) Data frame sent\nI0622 14:08:42.414944 1837 log.go:172] (0xc000820370) (0xc0004dc6e0) Stream removed, broadcasting: 1\nI0622 14:08:42.414968 1837 log.go:172] (0xc000820370) Go away received\nI0622 14:08:42.415312 1837 log.go:172] (0xc000820370) (0xc0004dc6e0) Stream removed, broadcasting: 1\nI0622 14:08:42.415331 1837 log.go:172] (0xc000820370) (0xc0004dc000) Stream removed, broadcasting: 3\nI0622 14:08:42.415346 1837 log.go:172] (0xc000820370) (0xc0006b4140) Stream removed, broadcasting: 5\n"
Jun 22 14:08:42.421: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 22 14:08:42.421: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 22 14:08:52.454: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jun 22 14:09:02.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3651 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 22 14:09:02.713: INFO: stderr: "I0622 14:09:02.614602 1858 log.go:172] (0xc00013adc0) (0xc0003286e0) Create stream\nI0622 14:09:02.614671 1858 log.go:172] (0xc00013adc0) (0xc0003286e0) Stream added, broadcasting: 1\nI0622 14:09:02.618608 1858 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0622 14:09:02.618669 1858 log.go:172] (0xc00013adc0) (0xc000608280) Create stream\nI0622 14:09:02.618698 1858 log.go:172] (0xc00013adc0) (0xc000608280) Stream added, broadcasting: 3\nI0622 14:09:02.619836 1858 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0622 14:09:02.619879 1858 log.go:172] (0xc00013adc0) (0xc000328000) Create stream\nI0622 14:09:02.619904 1858 log.go:172] (0xc00013adc0) (0xc000328000) Stream added, broadcasting: 5\nI0622 14:09:02.620936 1858 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0622 14:09:02.705025 1858 log.go:172] (0xc00013adc0) Data frame received for 3\nI0622 14:09:02.705069 1858 log.go:172] (0xc000608280) (3) Data frame handling\nI0622 14:09:02.705093 1858 log.go:172] (0xc000608280) (3) Data frame sent\nI0622 14:09:02.705106 1858 log.go:172] (0xc00013adc0) Data frame received for 3\nI0622 14:09:02.705292 1858 log.go:172] (0xc000608280) (3) Data frame handling\nI0622 14:09:02.705396 1858 log.go:172] (0xc00013adc0) Data frame received for 5\nI0622 14:09:02.705461 1858 log.go:172] (0xc000328000) (5) Data frame handling\nI0622 14:09:02.705495 1858 log.go:172] (0xc000328000) (5) Data frame sent\nI0622 14:09:02.705517 1858 log.go:172] (0xc00013adc0) Data frame received for 5\nI0622 14:09:02.705528 1858 log.go:172] (0xc000328000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 14:09:02.706838 1858 log.go:172] (0xc00013adc0) Data frame received for 1\nI0622 14:09:02.706876 1858 log.go:172] (0xc0003286e0) (1) Data frame handling\nI0622 14:09:02.706896 1858 log.go:172] (0xc0003286e0) (1) Data frame sent\nI0622 14:09:02.706991 1858 log.go:172] (0xc00013adc0) (0xc0003286e0) Stream removed, broadcasting: 1\nI0622 14:09:02.707037 1858 log.go:172] (0xc00013adc0) Go away received\nI0622 14:09:02.707498 1858 log.go:172] (0xc00013adc0) (0xc0003286e0) Stream removed, broadcasting: 1\nI0622 14:09:02.707520 1858 log.go:172] (0xc00013adc0) (0xc000608280) Stream removed, broadcasting: 3\nI0622 14:09:02.707531 1858 log.go:172] (0xc00013adc0) (0xc000328000) Stream removed, broadcasting: 5\n"
Jun 22 14:09:02.713: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 22 14:09:02.713: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
STEP: Rolling back to a previous revision
Jun 22 14:09:32.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3651 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 22 14:09:32.999: INFO: stderr: "I0622 14:09:32.862666 1880 log.go:172] (0xc000116dc0) (0xc0006d46e0) Create stream\nI0622 14:09:32.862709 1880 log.go:172] (0xc000116dc0) (0xc0006d46e0) Stream added, broadcasting: 1\nI0622 14:09:32.866409 1880 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0622 14:09:32.866443 1880 log.go:172] (0xc000116dc0) (0xc0006ae1e0) Create stream\nI0622 14:09:32.866455 1880 log.go:172] (0xc000116dc0) (0xc0006ae1e0) Stream added, broadcasting: 3\nI0622 14:09:32.867505 1880 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0622 14:09:32.867590 1880 log.go:172] (0xc000116dc0) (0xc0006d4000) Create stream\nI0622 14:09:32.867621 1880 log.go:172] (0xc000116dc0) (0xc0006d4000) Stream added, broadcasting: 5\nI0622 14:09:32.868548 1880 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0622 14:09:32.959376 1880 log.go:172] (0xc000116dc0) Data frame received for 5\nI0622 14:09:32.959412 1880 log.go:172] (0xc0006d4000) (5) Data frame handling\nI0622 14:09:32.959558 1880 log.go:172] (0xc0006d4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0622 14:09:32.991552 1880 log.go:172] (0xc000116dc0) Data frame received for 3\nI0622 14:09:32.991592 1880 log.go:172] (0xc0006ae1e0) (3) Data frame handling\nI0622 14:09:32.991622 1880 log.go:172] (0xc0006ae1e0) (3) Data frame sent\nI0622 14:09:32.991639 1880 log.go:172] (0xc000116dc0) Data frame received for 3\nI0622 14:09:32.991653 1880 log.go:172] (0xc0006ae1e0) (3) Data frame handling\nI0622 14:09:32.991906 1880 log.go:172] (0xc000116dc0) Data frame received for 5\nI0622 14:09:32.991924 1880 log.go:172] (0xc0006d4000) (5) Data frame handling\nI0622 14:09:32.993536 1880 log.go:172] (0xc000116dc0) Data frame received for 1\nI0622 14:09:32.993560 1880 log.go:172] (0xc0006d46e0) (1) Data frame handling\nI0622 14:09:32.993596 1880 log.go:172] (0xc0006d46e0) (1) Data frame sent\nI0622 14:09:32.993621 1880 log.go:172] (0xc000116dc0) (0xc0006d46e0) Stream removed, broadcasting: 1\nI0622 14:09:32.993870 1880 log.go:172] (0xc000116dc0) Go away received\nI0622 14:09:32.994119 1880 log.go:172] (0xc000116dc0) (0xc0006d46e0) Stream removed, broadcasting: 1\nI0622 14:09:32.994146 1880 log.go:172] (0xc000116dc0) (0xc0006ae1e0) Stream removed, broadcasting: 3\nI0622 14:09:32.994162 1880 log.go:172] (0xc000116dc0) (0xc0006d4000) Stream removed, broadcasting: 5\n"
Jun 22 14:09:32.999: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 22 14:09:32.999: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 22 14:09:43.034: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jun 22 14:09:53.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3651 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 22 14:09:53.289: INFO: stderr: "I0622 14:09:53.191297 1901 log.go:172] (0xc000a966e0) (0xc000614a00) Create stream\nI0622 14:09:53.191374 1901 log.go:172] (0xc000a966e0) (0xc000614a00) Stream added, broadcasting: 1\nI0622 14:09:53.194842 1901 log.go:172] (0xc000a966e0) Reply frame received for 1\nI0622 14:09:53.194903 1901 log.go:172] (0xc000a966e0) (0xc000614280) Create stream\nI0622 14:09:53.194912 1901 log.go:172] (0xc000a966e0) (0xc000614280) Stream added, broadcasting: 3\nI0622 14:09:53.195765 1901 log.go:172] (0xc000a966e0) Reply frame received for 3\nI0622 14:09:53.195834 1901 log.go:172] (0xc000a966e0) (0xc0003c0000) Create stream\nI0622 14:09:53.195871 1901 log.go:172] (0xc000a966e0) (0xc0003c0000) Stream added, broadcasting: 5\nI0622 14:09:53.196710 1901 log.go:172] (0xc000a966e0) Reply frame received for 5\nI0622 14:09:53.282371 1901 log.go:172] (0xc000a966e0) Data frame received for 3\nI0622 14:09:53.282422 1901 log.go:172] (0xc000614280) (3) Data frame handling\nI0622 14:09:53.282456 1901 log.go:172] (0xc000614280) (3) Data frame sent\nI0622 14:09:53.282477 1901 log.go:172] (0xc000a966e0) Data frame received for 3\nI0622 14:09:53.282508 1901 log.go:172] (0xc000614280) (3) Data frame handling\nI0622 14:09:53.282534 1901 log.go:172] (0xc000a966e0) Data frame received for 5\nI0622 14:09:53.282546 1901 log.go:172] (0xc0003c0000) (5) Data frame handling\nI0622 14:09:53.282559 1901 log.go:172] (0xc0003c0000) (5) Data frame sent\nI0622 14:09:53.282573 1901 log.go:172] (0xc000a966e0) Data frame received for 5\nI0622 14:09:53.282587 1901 log.go:172] (0xc0003c0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0622 14:09:53.284059 1901 log.go:172] (0xc000a966e0) Data frame received for 1\nI0622 14:09:53.284087 1901 log.go:172] (0xc000614a00) (1) Data frame handling\nI0622 14:09:53.284098 1901 log.go:172] (0xc000614a00) (1) Data frame sent\nI0622 14:09:53.284114 1901 log.go:172] (0xc000a966e0) (0xc000614a00) Stream removed, broadcasting: 1\nI0622 14:09:53.284180 1901 log.go:172] (0xc000a966e0) Go away received\nI0622 14:09:53.284502 1901 log.go:172] (0xc000a966e0) (0xc000614a00) Stream removed, broadcasting: 1\nI0622 14:09:53.284531 1901 log.go:172] (0xc000a966e0) (0xc000614280) Stream removed, broadcasting: 3\nI0622 14:09:53.284553 1901 log.go:172] (0xc000a966e0) (0xc0003c0000) Stream removed, broadcasting: 5\n"
Jun 22 14:09:53.290: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 22 14:09:53.290: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 22 14:10:03.340: INFO: Waiting for StatefulSet statefulset-3651/ss2 to complete update
Jun 22 14:10:03.340: INFO: Waiting for Pod statefulset-3651/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jun 22 14:10:03.340: INFO: Waiting for Pod statefulset-3651/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jun 22 14:10:13.346: INFO: Waiting for StatefulSet statefulset-3651/ss2 to complete update
Jun 22 14:10:13.346: INFO: Waiting for Pod statefulset-3651/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jun 22 14:10:23.348: INFO: Deleting all statefulset in ns statefulset-3651
Jun 22 14:10:23.351: INFO: Scaling statefulset ss2 to 0
Jun 22 14:10:43.369: INFO: Waiting for statefulset status.replicas updated to 0
Jun 22 14:10:43.372: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:10:43.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3651" for this suite.
Jun 22 14:10:51.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:10:51.483: INFO: namespace statefulset-3651 deletion completed in 8.090389523s • [SLOW TEST:149.436 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:10:51.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:10:51.535: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:10:52.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-7285" for this suite. Jun 22 14:10:58.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:10:58.796: INFO: namespace custom-resource-definition-7285 deletion completed in 6.093625648s • [SLOW TEST:7.312 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:10:58.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 22 14:10:58.909: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 22 14:11:03.914: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 
22 14:11:04.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9140" for this suite. Jun 22 14:11:11.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:11:11.132: INFO: namespace replication-controller-9140 deletion completed in 6.156865178s • [SLOW TEST:12.336 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:11:11.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 22 14:11:15.984: INFO: Successfully updated pod "pod-update-9bd5de74-a035-4ce5-8033-f8a861e842a0" STEP: verifying the updated pod is in kubernetes Jun 22 14:11:15.995: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:11:15.995: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-673" for this suite. Jun 22 14:11:38.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:11:38.105: INFO: namespace pods-673 deletion completed in 22.107492937s • [SLOW TEST:26.973 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:11:38.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 22 14:11:38.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-597' Jun 22 14:11:38.491: INFO: stderr: "" Jun 22 14:11:38.491: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: 
waiting for all containers in name=update-demo pods to come up. Jun 22 14:11:38.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-597' Jun 22 14:11:38.580: INFO: stderr: "" Jun 22 14:11:38.581: INFO: stdout: "update-demo-nautilus-92vjm update-demo-nautilus-fzsx6 " Jun 22 14:11:38.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92vjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-597' Jun 22 14:11:38.672: INFO: stderr: "" Jun 22 14:11:38.672: INFO: stdout: "" Jun 22 14:11:38.672: INFO: update-demo-nautilus-92vjm is created but not running Jun 22 14:11:43.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-597' Jun 22 14:11:43.779: INFO: stderr: "" Jun 22 14:11:43.779: INFO: stdout: "update-demo-nautilus-92vjm update-demo-nautilus-fzsx6 " Jun 22 14:11:43.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92vjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-597' Jun 22 14:11:43.876: INFO: stderr: "" Jun 22 14:11:43.876: INFO: stdout: "true" Jun 22 14:11:43.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92vjm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-597' Jun 22 14:11:43.972: INFO: stderr: "" Jun 22 14:11:43.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:11:43.972: INFO: validating pod update-demo-nautilus-92vjm Jun 22 14:11:43.986: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:11:43.986: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:11:43.986: INFO: update-demo-nautilus-92vjm is verified up and running Jun 22 14:11:43.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzsx6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-597' Jun 22 14:11:44.077: INFO: stderr: "" Jun 22 14:11:44.077: INFO: stdout: "true" Jun 22 14:11:44.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzsx6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-597' Jun 22 14:11:44.171: INFO: stderr: "" Jun 22 14:11:44.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:11:44.171: INFO: validating pod update-demo-nautilus-fzsx6 Jun 22 14:11:44.177: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:11:44.177: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 22 14:11:44.177: INFO: update-demo-nautilus-fzsx6 is verified up and running STEP: using delete to clean up resources Jun 22 14:11:44.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-597' Jun 22 14:11:44.279: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 14:11:44.279: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 14:11:44.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-597' Jun 22 14:11:44.385: INFO: stderr: "No resources found.\n" Jun 22 14:11:44.385: INFO: stdout: "" Jun 22 14:11:44.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-597 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 14:11:44.476: INFO: stderr: "" Jun 22 14:11:44.476: INFO: stdout: "update-demo-nautilus-92vjm\nupdate-demo-nautilus-fzsx6\n" Jun 22 14:11:44.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-597' Jun 22 14:11:45.084: INFO: stderr: "No resources found.\n" Jun 22 14:11:45.084: INFO: stdout: "" Jun 22 14:11:45.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-597 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 14:11:45.177: INFO: stderr: "" Jun 22 14:11:45.177: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:11:45.177: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-597" for this suite. Jun 22 14:12:07.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:12:07.284: INFO: namespace kubectl-597 deletion completed in 22.103698959s • [SLOW TEST:29.179 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:12:07.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:12:07.379: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 22 14:12:07.389: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:07.406: INFO: Number of nodes with available pods: 0 Jun 22 14:12:07.406: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:12:08.411: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:08.415: INFO: Number of nodes with available pods: 0 Jun 22 14:12:08.415: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:12:09.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:09.744: INFO: Number of nodes with available pods: 0 Jun 22 14:12:09.744: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:12:10.411: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:10.415: INFO: Number of nodes with available pods: 0 Jun 22 14:12:10.415: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:12:11.507: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:11.529: INFO: Number of nodes with available pods: 0 Jun 22 14:12:11.529: INFO: Node iruya-worker is running more than one daemon pod Jun 22 14:12:12.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:12.448: INFO: Number of nodes with available pods: 2 Jun 22 14:12:12.448: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 22 14:12:12.475: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:12.475: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:12.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:13.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:13.498: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:13.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:14.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:14.497: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:14.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:15.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:15.498: INFO: Wrong image for pod: daemon-set-ws79f. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:15.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:16.524: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:16.524: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:16.524: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:16.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:17.520: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:17.520: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:17.520: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:17.524: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:18.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:18.497: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 22 14:12:18.497: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:18.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:19.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:19.498: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:19.498: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:19.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:20.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:20.497: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:20.497: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:20.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:21.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:21.497: INFO: Wrong image for pod: daemon-set-ws79f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 22 14:12:21.497: INFO: Pod daemon-set-ws79f is not available Jun 22 14:12:21.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:22.498: INFO: Pod daemon-set-dzxr7 is not available Jun 22 14:12:22.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:22.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:23.498: INFO: Pod daemon-set-dzxr7 is not available Jun 22 14:12:23.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:23.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:24.497: INFO: Pod daemon-set-dzxr7 is not available Jun 22 14:12:24.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:24.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:25.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:25.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:26.498: INFO: Wrong image for pod: daemon-set-wn5nr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:26.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:27.496: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:27.496: INFO: Pod daemon-set-wn5nr is not available Jun 22 14:12:27.499: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:28.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:28.497: INFO: Pod daemon-set-wn5nr is not available Jun 22 14:12:28.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:29.498: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:29.498: INFO: Pod daemon-set-wn5nr is not available Jun 22 14:12:29.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:30.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 22 14:12:30.497: INFO: Pod daemon-set-wn5nr is not available Jun 22 14:12:30.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:31.497: INFO: Wrong image for pod: daemon-set-wn5nr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 22 14:12:31.497: INFO: Pod daemon-set-wn5nr is not available Jun 22 14:12:31.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:32.496: INFO: Pod daemon-set-nk9vq is not available Jun 22 14:12:32.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 22 14:12:32.503: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:32.506: INFO: Number of nodes with available pods: 1 Jun 22 14:12:32.506: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 14:12:33.804: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:33.808: INFO: Number of nodes with available pods: 1 Jun 22 14:12:33.808: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 14:12:34.511: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:34.515: INFO: Number of nodes with available pods: 1 Jun 22 14:12:34.515: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 14:12:35.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:35.516: INFO: Number of nodes with available pods: 1 Jun 22 14:12:35.516: INFO: Node iruya-worker2 is running more than one daemon pod Jun 22 14:12:36.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 22 14:12:36.515: INFO: Number of nodes with available pods: 2 Jun 22 14:12:36.515: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9255, will wait for the garbage collector 
to delete the pods Jun 22 14:12:36.590: INFO: Deleting DaemonSet.extensions daemon-set took: 6.391889ms Jun 22 14:12:36.890: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272949ms Jun 22 14:12:42.194: INFO: Number of nodes with available pods: 0 Jun 22 14:12:42.194: INFO: Number of running nodes: 0, number of available pods: 0 Jun 22 14:12:42.197: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9255/daemonsets","resourceVersion":"17866936"},"items":null} Jun 22 14:12:42.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9255/pods","resourceVersion":"17866936"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:12:42.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9255" for this suite. 
Jun 22 14:12:48.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:12:48.379: INFO: namespace daemonsets-9255 deletion completed in 6.166669358s • [SLOW TEST:41.093 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:12:48.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-0c63f540-bdf7-4f44-bcbc-93dda19400fb STEP: Creating a pod to test consume configMaps Jun 22 14:12:48.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e" in namespace "configmap-3225" to be "success or failure" Jun 22 14:12:48.511: INFO: Pod "pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.304841ms Jun 22 14:12:50.515: INFO: Pod "pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018481902s Jun 22 14:12:52.518: INFO: Pod "pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021805161s STEP: Saw pod success Jun 22 14:12:52.518: INFO: Pod "pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e" satisfied condition "success or failure" Jun 22 14:12:52.521: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e container configmap-volume-test: STEP: delete the pod Jun 22 14:12:52.555: INFO: Waiting for pod pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e to disappear Jun 22 14:12:52.565: INFO: Pod pod-configmaps-8656efe0-3a7b-4026-a8e9-a5df194a5c4e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:12:52.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3225" for this suite. 
Jun 22 14:12:58.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:12:58.708: INFO: namespace configmap-3225 deletion completed in 6.140349341s • [SLOW TEST:10.329 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:12:58.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 14:12:58.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29" in namespace "projected-3710" to be "success or failure" Jun 22 14:12:58.818: INFO: Pod "downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 59.009455ms Jun 22 14:13:00.823: INFO: Pod "downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063391965s Jun 22 14:13:02.827: INFO: Pod "downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06747835s STEP: Saw pod success Jun 22 14:13:02.827: INFO: Pod "downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29" satisfied condition "success or failure" Jun 22 14:13:02.830: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29 container client-container: STEP: delete the pod Jun 22 14:13:02.855: INFO: Waiting for pod downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29 to disappear Jun 22 14:13:02.865: INFO: Pod downwardapi-volume-26a9bdd1-f7eb-49f0-a8b6-1055f31d0d29 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:13:02.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3710" for this suite. 
Jun 22 14:13:09.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:13:09.091: INFO: namespace projected-3710 deletion completed in 6.222832771s • [SLOW TEST:10.381 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:13:09.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 22 14:13:13.718: INFO: Successfully updated pod "labelsupdate3b559a22-d85e-47b2-8201-e1766b841f89" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:13:15.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8017" for this suite. 
Jun 22 14:13:37.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:13:37.829: INFO: namespace projected-8017 deletion completed in 22.089003291s • [SLOW TEST:28.738 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:13:37.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 22 14:13:37.942: INFO: Waiting up to 5m0s for pod "pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a" in namespace "emptydir-8540" to be "success or failure" Jun 22 14:13:37.960: INFO: Pod "pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.304804ms Jun 22 14:13:40.004: INFO: Pod "pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061963318s Jun 22 14:13:42.009: INFO: Pod "pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067067569s STEP: Saw pod success Jun 22 14:13:42.009: INFO: Pod "pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a" satisfied condition "success or failure" Jun 22 14:13:42.012: INFO: Trying to get logs from node iruya-worker2 pod pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a container test-container: STEP: delete the pod Jun 22 14:13:42.204: INFO: Waiting for pod pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a to disappear Jun 22 14:13:42.237: INFO: Pod pod-c709bedd-3f6f-4893-ada7-3cfc0e969d3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:13:42.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8540" for this suite. Jun 22 14:13:48.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:13:48.351: INFO: namespace emptydir-8540 deletion completed in 6.109766037s • [SLOW TEST:10.521 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:13:48.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should 
be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-eb41bca1-b79e-4b1d-9caa-8c2b659ed98d STEP: Creating a pod to test consume configMaps Jun 22 14:13:48.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e" in namespace "configmap-8228" to be "success or failure" Jun 22 14:13:48.471: INFO: Pod "pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.652775ms Jun 22 14:13:50.475: INFO: Pod "pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007054574s Jun 22 14:13:52.478: INFO: Pod "pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010823688s STEP: Saw pod success Jun 22 14:13:52.478: INFO: Pod "pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e" satisfied condition "success or failure" Jun 22 14:13:52.481: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e container configmap-volume-test: STEP: delete the pod Jun 22 14:13:52.497: INFO: Waiting for pod pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e to disappear Jun 22 14:13:52.502: INFO: Pod pod-configmaps-d28291f0-cf18-48b9-bd23-36a961e4c80e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:13:52.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8228" for this suite. 
Jun 22 14:13:58.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:13:58.602: INFO: namespace configmap-8228 deletion completed in 6.096662913s • [SLOW TEST:10.251 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:13:58.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 14:14:01.853: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:14:01.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2224" for this suite. Jun 22 14:14:07.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:14:08.000: INFO: namespace container-runtime-2224 deletion completed in 6.127461894s • [SLOW TEST:9.398 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:14:08.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 22 14:14:08.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1594' Jun 22 14:14:08.293: INFO: stderr: "" Jun 22 14:14:08.293: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 14:14:08.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:08.421: INFO: stderr: "" Jun 22 14:14:08.421: INFO: stdout: "update-demo-nautilus-lv8z4 update-demo-nautilus-vmjcd " Jun 22 14:14:08.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lv8z4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:08.517: INFO: stderr: "" Jun 22 14:14:08.517: INFO: stdout: "" Jun 22 14:14:08.517: INFO: update-demo-nautilus-lv8z4 is created but not running Jun 22 14:14:13.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:13.612: INFO: stderr: "" Jun 22 14:14:13.612: INFO: stdout: "update-demo-nautilus-lv8z4 update-demo-nautilus-vmjcd " Jun 22 14:14:13.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lv8z4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:13.707: INFO: stderr: "" Jun 22 14:14:13.707: INFO: stdout: "true" Jun 22 14:14:13.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lv8z4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:13.805: INFO: stderr: "" Jun 22 14:14:13.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:13.805: INFO: validating pod update-demo-nautilus-lv8z4 Jun 22 14:14:13.809: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:13.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:14:13.809: INFO: update-demo-nautilus-lv8z4 is verified up and running Jun 22 14:14:13.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:13.909: INFO: stderr: "" Jun 22 14:14:13.909: INFO: stdout: "true" Jun 22 14:14:13.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:14.008: INFO: stderr: "" Jun 22 14:14:14.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:14.008: INFO: validating pod update-demo-nautilus-vmjcd Jun 22 14:14:14.012: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:14.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 22 14:14:14.012: INFO: update-demo-nautilus-vmjcd is verified up and running STEP: scaling down the replication controller Jun 22 14:14:14.015: INFO: scanned /root for discovery docs: Jun 22 14:14:14.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1594' Jun 22 14:14:15.187: INFO: stderr: "" Jun 22 14:14:15.187: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 14:14:15.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:15.284: INFO: stderr: "" Jun 22 14:14:15.284: INFO: stdout: "update-demo-nautilus-lv8z4 update-demo-nautilus-vmjcd " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 22 14:14:20.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:20.391: INFO: stderr: "" Jun 22 14:14:20.392: INFO: stdout: "update-demo-nautilus-lv8z4 update-demo-nautilus-vmjcd " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 22 14:14:25.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:25.488: INFO: stderr: "" Jun 22 14:14:25.488: INFO: stdout: "update-demo-nautilus-vmjcd " Jun 22 14:14:25.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:25.584: INFO: stderr: "" Jun 22 14:14:25.584: INFO: stdout: "true" Jun 22 14:14:25.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:25.693: INFO: stderr: "" Jun 22 14:14:25.693: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:25.693: INFO: validating pod update-demo-nautilus-vmjcd Jun 22 14:14:25.697: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:25.697: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:14:25.697: INFO: update-demo-nautilus-vmjcd is verified up and running STEP: scaling up the replication controller Jun 22 14:14:25.699: INFO: scanned /root for discovery docs: Jun 22 14:14:25.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1594' Jun 22 14:14:26.849: INFO: stderr: "" Jun 22 14:14:26.850: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 22 14:14:26.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:26.953: INFO: stderr: "" Jun 22 14:14:26.953: INFO: stdout: "update-demo-nautilus-vmjcd update-demo-nautilus-wbn7m " Jun 22 14:14:26.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:27.039: INFO: stderr: "" Jun 22 14:14:27.039: INFO: stdout: "true" Jun 22 14:14:27.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:27.130: INFO: stderr: "" Jun 22 14:14:27.130: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:27.130: INFO: validating pod update-demo-nautilus-vmjcd Jun 22 14:14:27.134: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:27.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:14:27.134: INFO: update-demo-nautilus-vmjcd is verified up and running Jun 22 14:14:27.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wbn7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:27.243: INFO: stderr: "" Jun 22 14:14:27.243: INFO: stdout: "" Jun 22 14:14:27.243: INFO: update-demo-nautilus-wbn7m is created but not running Jun 22 14:14:32.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1594' Jun 22 14:14:32.353: INFO: stderr: "" Jun 22 14:14:32.353: INFO: stdout: "update-demo-nautilus-vmjcd update-demo-nautilus-wbn7m " Jun 22 14:14:32.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:32.459: INFO: stderr: "" Jun 22 14:14:32.459: INFO: stdout: "true" Jun 22 14:14:32.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vmjcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:32.553: INFO: stderr: "" Jun 22 14:14:32.553: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:32.553: INFO: validating pod update-demo-nautilus-vmjcd Jun 22 14:14:32.557: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:32.557: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:14:32.557: INFO: update-demo-nautilus-vmjcd is verified up and running Jun 22 14:14:32.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wbn7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:32.648: INFO: stderr: "" Jun 22 14:14:32.648: INFO: stdout: "true" Jun 22 14:14:32.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wbn7m -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1594' Jun 22 14:14:32.742: INFO: stderr: "" Jun 22 14:14:32.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 22 14:14:32.742: INFO: validating pod update-demo-nautilus-wbn7m Jun 22 14:14:32.746: INFO: got data: { "image": "nautilus.jpg" } Jun 22 14:14:32.746: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 22 14:14:32.746: INFO: update-demo-nautilus-wbn7m is verified up and running STEP: using delete to clean up resources Jun 22 14:14:32.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1594' Jun 22 14:14:32.840: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 22 14:14:32.840: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 22 14:14:32.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1594' Jun 22 14:14:32.939: INFO: stderr: "No resources found.\n" Jun 22 14:14:32.939: INFO: stdout: "" Jun 22 14:14:32.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1594 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 14:14:33.047: INFO: stderr: "" Jun 22 14:14:33.047: INFO: stdout: "update-demo-nautilus-vmjcd\nupdate-demo-nautilus-wbn7m\n" Jun 22 14:14:33.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1594' Jun 22 14:14:33.675: INFO: stderr: "No resources found.\n" Jun 22 14:14:33.675: INFO: stdout: "" Jun 22 
14:14:33.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1594 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 22 14:14:33.777: INFO: stderr: "" Jun 22 14:14:33.777: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:14:33.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1594" for this suite. Jun 22 14:14:55.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:14:55.904: INFO: namespace kubectl-1594 deletion completed in 22.122795886s • [SLOW TEST:47.904 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:14:55.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jun 22 14:14:56.531: INFO: created pod pod-service-account-defaultsa Jun 22 14:14:56.531: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 22 14:14:56.543: INFO: created pod pod-service-account-mountsa Jun 22 14:14:56.543: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 22 14:14:56.554: INFO: created pod pod-service-account-nomountsa Jun 22 14:14:56.554: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 22 14:14:56.612: INFO: created pod pod-service-account-defaultsa-mountspec Jun 22 14:14:56.612: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 22 14:14:56.638: INFO: created pod pod-service-account-mountsa-mountspec Jun 22 14:14:56.638: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 22 14:14:56.673: INFO: created pod pod-service-account-nomountsa-mountspec Jun 22 14:14:56.674: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 22 14:14:56.695: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 22 14:14:56.695: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 22 14:14:56.778: INFO: created pod pod-service-account-mountsa-nomountspec Jun 22 14:14:56.778: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 22 14:14:56.793: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 22 14:14:56.793: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:14:56.793: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "svcaccounts-6495" for this suite. Jun 22 14:15:24.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:15:25.016: INFO: namespace svcaccounts-6495 deletion completed in 28.146653426s • [SLOW TEST:29.112 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:15:25.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 22 14:15:25.132: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8731,SelfLink:/api/v1/namespaces/watch-8731/configmaps/e2e-watch-test-resource-version,UID:618cef61-17b4-4ccf-9550-21ecef0bea6b,ResourceVersion:17867617,Generation:0,CreationTimestamp:2020-06-22 14:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 14:15:25.132: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8731,SelfLink:/api/v1/namespaces/watch-8731/configmaps/e2e-watch-test-resource-version,UID:618cef61-17b4-4ccf-9550-21ecef0bea6b,ResourceVersion:17867618,Generation:0,CreationTimestamp:2020-06-22 14:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:15:25.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8731" for this suite. 
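The watch test above starts a watch at the resourceVersion returned by the first update and then observes only the later MODIFIED and DELETED events. A minimal in-memory sketch of that contract, not the real Kubernetes watch implementation: the two later resourceVersions (17867617 and 17867618) come from the log, while the earlier ones are invented for illustration.

```python
def watch_from(events, resource_version):
    """Yield (type, rv) events strictly newer than resource_version,
    in order -- the contract the conformance test verifies."""
    return [e for e in events if e[1] > resource_version]

# Event log mirroring the test: create, two modifications, delete.
# The first two resourceVersions are hypothetical placeholders.
events = [
    ("ADDED", 17867615),
    ("MODIFIED", 17867616),   # first update; the watch starts here
    ("MODIFIED", 17867617),   # rv from the log
    ("DELETED", 17867618),    # rv from the log
]

# Starting from the first update's rv, only the later MODIFIED and
# DELETED events are observed, matching the log output above.
replayed = watch_from(events, 17867616)
```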
Jun 22 14:15:31.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:15:31.256: INFO: namespace watch-8731 deletion completed in 6.087605072s • [SLOW TEST:6.239 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:15:31.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jun 22 14:15:35.875: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5657 pod-service-account-275d2fe6-c331-4ec5-86c5-696dcff84ad3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 22 14:15:36.122: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5657 pod-service-account-275d2fe6-c331-4ec5-86c5-696dcff84ad3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 22 14:15:36.351: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-5657 pod-service-account-275d2fe6-c331-4ec5-86c5-696dcff84ad3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:15:36.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5657" for this suite. Jun 22 14:15:42.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:15:42.643: INFO: namespace svcaccounts-5657 deletion completed in 6.082474619s • [SLOW TEST:11.387 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:15:42.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-knrk STEP: Creating a pod to 
test atomic-volume-subpath Jun 22 14:15:42.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-knrk" in namespace "subpath-9662" to be "success or failure" Jun 22 14:15:42.768: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Pending", Reason="", readiness=false. Elapsed: 28.701461ms Jun 22 14:15:44.947: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207365681s Jun 22 14:15:46.951: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 4.212020757s Jun 22 14:15:48.956: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 6.216721745s Jun 22 14:15:50.961: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 8.221418604s Jun 22 14:15:52.965: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 10.225990998s Jun 22 14:15:54.969: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 12.230067204s Jun 22 14:15:56.974: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 14.234678442s Jun 22 14:15:58.979: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 16.239527748s Jun 22 14:16:00.983: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 18.243769844s Jun 22 14:16:02.987: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 20.248062344s Jun 22 14:16:04.992: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Running", Reason="", readiness=true. Elapsed: 22.252989611s Jun 22 14:16:06.996: INFO: Pod "pod-subpath-test-secret-knrk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.256758584s STEP: Saw pod success Jun 22 14:16:06.996: INFO: Pod "pod-subpath-test-secret-knrk" satisfied condition "success or failure" Jun 22 14:16:06.998: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-knrk container test-container-subpath-secret-knrk: STEP: delete the pod Jun 22 14:16:07.017: INFO: Waiting for pod pod-subpath-test-secret-knrk to disappear Jun 22 14:16:07.022: INFO: Pod pod-subpath-test-secret-knrk no longer exists STEP: Deleting pod pod-subpath-test-secret-knrk Jun 22 14:16:07.022: INFO: Deleting pod "pod-subpath-test-secret-knrk" in namespace "subpath-9662" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:16:07.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9662" for this suite. Jun 22 14:16:13.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:16:13.127: INFO: namespace subpath-9662 deletion completed in 6.099447177s • [SLOW TEST:30.483 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:16:13.127: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-e785a5b0-da54-4862-935a-1da380fc52bd STEP: Creating the pod STEP: Updating configmap configmap-test-upd-e785a5b0-da54-4862-935a-1da380fc52bd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:16:19.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7709" for this suite. Jun 22 14:16:41.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:16:41.353: INFO: namespace configmap-7709 deletion completed in 22.098558663s • [SLOW TEST:28.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:16:41.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 22 14:16:41.411: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:16:52.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1025" for this suite. Jun 22 14:16:58.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:16:58.292: INFO: namespace pods-1025 deletion completed in 6.103107121s • [SLOW TEST:16.938 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:16:58.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:17:02.459: INFO: Waiting up to 5m0s for pod "client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a" in namespace "pods-7672" to be "success or failure" Jun 22 14:17:02.470: INFO: Pod "client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.500369ms Jun 22 14:17:04.474: INFO: Pod "client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015196364s Jun 22 14:17:06.486: INFO: Pod "client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027553875s STEP: Saw pod success Jun 22 14:17:06.486: INFO: Pod "client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a" satisfied condition "success or failure" Jun 22 14:17:06.490: INFO: Trying to get logs from node iruya-worker pod client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a container env3cont: STEP: delete the pod Jun 22 14:17:06.511: INFO: Waiting for pod client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a to disappear Jun 22 14:17:06.514: INFO: Pod client-envvars-2d10ea72-4618-42e2-8a19-6dd055baf82a no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:17:06.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7672" for this suite. 
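The test above confirms that a pod's container sees docker-links-style environment variables for services that existed when the pod started. A rough sketch of how the two `_SERVICE_` variable names are derived; the service name and cluster IP here are hypothetical, and the real kubelet also injects additional `{NAME}_PORT_*` variables not shown.

```python
def service_env_vars(name, host, port):
    """Build the {NAME}_SERVICE_HOST / {NAME}_SERVICE_PORT pair a pod
    receives for a service: uppercase the name, dashes to underscores."""
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": host,
        f"{key}_SERVICE_PORT": str(port),
    }

# Hypothetical service "backend-svc" with a made-up ClusterIP.
env = service_env_vars("backend-svc", "10.96.0.12", 80)
```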
Jun 22 14:17:46.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:17:46.607: INFO: namespace pods-7672 deletion completed in 40.090050159s • [SLOW TEST:48.315 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:17:46.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:17:46.668: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 5.371542ms) Jun 22 14:17:46.671: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.969864ms) Jun 22 14:17:46.675: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.050735ms) Jun 22 14:17:46.678: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.317353ms) Jun 22 14:17:46.681: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.050148ms) Jun 22 14:17:46.684: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.955527ms) Jun 22 14:17:46.688: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.528188ms) Jun 22 14:17:46.690: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.656437ms) Jun 22 14:17:46.715: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 25.067345ms) Jun 22 14:17:46.720: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 4.142224ms) Jun 22 14:17:46.723: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.661977ms) Jun 22 14:17:46.727: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.498459ms) Jun 22 14:17:46.730: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.596909ms) Jun 22 14:17:46.734: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.348284ms) Jun 22 14:17:46.738: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.887997ms) Jun 22 14:17:46.741: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.579674ms) Jun 22 14:17:46.744: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.059805ms) Jun 22 14:17:46.748: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.506057ms) Jun 22 14:17:46.751: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.307689ms) Jun 22 14:17:46.755: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.255169ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:17:46.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5650" for this suite. Jun 22 14:17:52.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:17:52.854: INFO: namespace proxy-5650 deletion completed in 6.096018165s • [SLOW TEST:6.246 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:17:52.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-9836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9836 to expose endpoints map[] Jun 22 14:17:52.934: INFO: Get endpoints failed 
(12.14356ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 22 14:17:53.938: INFO: successfully validated that service endpoint-test2 in namespace services-9836 exposes endpoints map[] (1.015373222s elapsed) STEP: Creating pod pod1 in namespace services-9836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9836 to expose endpoints map[pod1:[80]] Jun 22 14:17:57.018: INFO: successfully validated that service endpoint-test2 in namespace services-9836 exposes endpoints map[pod1:[80]] (3.073798956s elapsed) STEP: Creating pod pod2 in namespace services-9836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9836 to expose endpoints map[pod1:[80] pod2:[80]] Jun 22 14:18:01.128: INFO: successfully validated that service endpoint-test2 in namespace services-9836 exposes endpoints map[pod1:[80] pod2:[80]] (4.077723365s elapsed) STEP: Deleting pod pod1 in namespace services-9836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9836 to expose endpoints map[pod2:[80]] Jun 22 14:18:02.190: INFO: successfully validated that service endpoint-test2 in namespace services-9836 exposes endpoints map[pod2:[80]] (1.056661541s elapsed) STEP: Deleting pod pod2 in namespace services-9836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9836 to expose endpoints map[] Jun 22 14:18:03.218: INFO: successfully validated that service endpoint-test2 in namespace services-9836 exposes endpoints map[] (1.022622284s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:18:03.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9836" for this suite. 
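The sequence above checks that the service's endpoints track the pod lifecycle: empty, then `map[pod1:[80]]`, then `map[pod1:[80] pod2:[80]]`, then `map[pod2:[80]]` once pod1 is deleted. A toy model of that bookkeeping, using the pod names and port from the log:

```python
def expected_endpoints(pods):
    """Endpoints a service should expose: {pod_name: ports} for every
    pod that is ready -- the map[pod1:[80] pod2:[80]] notation above."""
    return {name: ports for name, (ready, ports) in pods.items() if ready}

pods = {"pod1": (True, [80]), "pod2": (True, [80])}
both = expected_endpoints(pods)       # map[pod1:[80] pod2:[80]]
del pods["pod1"]                      # deleting pod1 ...
only_pod2 = expected_endpoints(pods)  # ... leaves map[pod2:[80]]
```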
Jun 22 14:18:21.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:18:21.376: INFO: namespace services-9836 deletion completed in 18.088500272s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:28.521 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:18:21.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 22 14:18:21.455: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6853,SelfLink:/api/v1/namespaces/watch-6853/configmaps/e2e-watch-test-watch-closed,UID:e0218ad8-0d9e-427a-b524-fafc7db31083,ResourceVersion:17868190,Generation:0,CreationTimestamp:2020-06-22 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 22 14:18:21.455: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6853,SelfLink:/api/v1/namespaces/watch-6853/configmaps/e2e-watch-test-watch-closed,UID:e0218ad8-0d9e-427a-b524-fafc7db31083,ResourceVersion:17868191,Generation:0,CreationTimestamp:2020-06-22 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 22 14:18:21.541: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6853,SelfLink:/api/v1/namespaces/watch-6853/configmaps/e2e-watch-test-watch-closed,UID:e0218ad8-0d9e-427a-b524-fafc7db31083,ResourceVersion:17868192,Generation:0,CreationTimestamp:2020-06-22 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 22 14:18:21.542: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6853,SelfLink:/api/v1/namespaces/watch-6853/configmaps/e2e-watch-test-watch-closed,UID:e0218ad8-0d9e-427a-b524-fafc7db31083,ResourceVersion:17868193,Generation:0,CreationTimestamp:2020-06-22 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:18:21.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6853" for this suite. 
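The restart test above closes a watch after two notifications, then opens a new watch from the last resourceVersion it observed and expects to see only the subsequent changes. A minimal sketch of that resume bookkeeping; the four event resourceVersions (17868190 through 17868193) are taken from the log, while the starting resourceVersion is invented for illustration.

```python
class ResumableWatch:
    """Tracks the last resourceVersion delivered so a replacement watch
    can resume where the previous one stopped. In-memory stand-in only."""

    def __init__(self, start_rv):
        self.last_rv = start_rv

    def consume(self, events):
        new = [e for e in events if e[1] > self.last_rv]
        if new:
            self.last_rv = new[-1][1]
        return new

events = [
    ("ADDED", 17868190),
    ("MODIFIED", 17868191),
    ("MODIFIED", 17868192),  # made while the watch was closed
    ("DELETED", 17868193),
]

w = ResumableWatch(start_rv=17868189)  # hypothetical starting rv
first = w.consume(events[:2])   # watch closed after two notifications
second = w.consume(events)      # restarted watch sees only newer events
```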
Jun 22 14:18:27.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:18:27.629: INFO: namespace watch-6853 deletion completed in 6.077595989s • [SLOW TEST:6.253 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:18:27.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jun 22 14:18:27.685: INFO: Waiting up to 5m0s for pod "var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5" in namespace "var-expansion-7366" to be "success or failure" Jun 22 14:18:27.690: INFO: Pod "var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449328ms Jun 22 14:18:29.694: INFO: Pod "var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008388036s Jun 22 14:18:31.699: INFO: Pod "var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013214782s STEP: Saw pod success Jun 22 14:18:31.699: INFO: Pod "var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5" satisfied condition "success or failure" Jun 22 14:18:31.702: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5 container dapi-container: STEP: delete the pod Jun 22 14:18:31.739: INFO: Waiting for pod var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5 to disappear Jun 22 14:18:31.743: INFO: Pod var-expansion-7f0e8d9f-2c5f-4a39-b415-ec13458f8bb5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:18:31.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7366" for this suite. Jun 22 14:18:37.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:18:37.917: INFO: namespace var-expansion-7366 deletion completed in 6.170633678s • [SLOW TEST:10.288 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:18:37.918: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:18:43.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4350" for this suite. Jun 22 14:18:49.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:18:49.706: INFO: namespace watch-4350 deletion completed in 6.182040546s • [SLOW TEST:11.789 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:18:49.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] 
should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0622 14:18:59.808024 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 14:18:59.808: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:18:59.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7460" for this suite. 
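The garbage-collector test above relies on ownerReferences: the pods created by the replication controller list the rc as their owner, so deleting the rc without orphaning cascades to the pods. A minimal sketch of that cascade rule (object names and the flat `ownerReferences` shape are invented for illustration, not the real API types):

```python
# Toy model of ownerReference-based cascading deletion: deleting an owner
# transitively deletes every object that references it as an owner.

def cascade_delete(objects, name):
    """Delete `name` and, transitively, all objects owned by a deleted object."""
    doomed = {name}
    changed = True
    while changed:
        changed = False
        for obj in objects:
            owners = set(obj.get("ownerReferences", []))
            if obj["name"] not in doomed and doomed & owners:
                doomed.add(obj["name"])
                changed = True
    return [o for o in objects if o["name"] not in doomed]

cluster = [
    {"name": "simpletest.rc"},
    {"name": "simpletest.rc-pod-1", "ownerReferences": ["simpletest.rc"]},
    {"name": "simpletest.rc-pod-2", "ownerReferences": ["simpletest.rc"]},
    {"name": "unrelated-pod"},
]
remaining = cascade_delete(cluster, "simpletest.rc")  # only unrelated-pod survives
```

With `--cascade=false` (orphaning), the controller's dependents would instead have their ownerReferences cleared and survive, which is the other branch this test family covers.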
Jun 22 14:19:05.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:19:05.898: INFO: namespace gc-7460 deletion completed in 6.086232728s
• [SLOW TEST:16.192 seconds] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:19:05.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jun 22 14:19:10.004: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 22 14:19:15.114: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io]
[sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:19:15.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7718" for this suite. Jun 22 14:19:21.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:19:21.277: INFO: namespace pods-7718 deletion completed in 6.154359116s • [SLOW TEST:15.378 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:19:21.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-793c958a-3571-438d-990b-98a1b65cf4d2 STEP: Creating a pod to test consume configMaps Jun 22 14:19:21.351: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45" in namespace "projected-7432" to be "success or failure" Jun 22 14:19:21.370: INFO: Pod "pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45": Phase="Pending", Reason="", readiness=false. Elapsed: 18.930524ms Jun 22 14:19:23.377: INFO: Pod "pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0259533s Jun 22 14:19:25.381: INFO: Pod "pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030069412s STEP: Saw pod success Jun 22 14:19:25.381: INFO: Pod "pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45" satisfied condition "success or failure" Jun 22 14:19:25.383: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45 container projected-configmap-volume-test: STEP: delete the pod Jun 22 14:19:25.420: INFO: Waiting for pod pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45 to disappear Jun 22 14:19:25.432: INFO: Pod pod-projected-configmaps-21599c04-aaa6-4b75-be2b-745b5e0fac45 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:19:25.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7432" for this suite. 
Jun 22 14:19:31.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:19:31.553: INFO: namespace projected-7432 deletion completed in 6.118061214s • [SLOW TEST:10.276 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:19:31.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 22 14:19:31.599: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 22 14:19:31.616: INFO: Waiting for terminating namespaces to be deleted... 
Jun 22 14:19:31.618: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
Jun 22 14:19:31.624: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.624: INFO: Container kube-proxy ready: true, restart count 0
Jun 22 14:19:31.624: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.624: INFO: Container kindnet-cni ready: true, restart count 2
Jun 22 14:19:31.624: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test
Jun 22 14:19:31.630: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.630: INFO: Container coredns ready: true, restart count 0
Jun 22 14:19:31.630: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.630: INFO: Container coredns ready: true, restart count 0
Jun 22 14:19:31.630: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.630: INFO: Container kube-proxy ready: true, restart count 0
Jun 22 14:19:31.630: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded)
Jun 22 14:19:31.630: INFO: Container kindnet-cni ready: true, restart count 2
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Jun 22 14:19:31.706: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Jun 22 14:19:31.706: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Jun 22 14:19:31.706: INFO: Pod
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Jun 22 14:19:31.706: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Jun 22 14:19:31.706: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Jun 22 14:19:31.706: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-71a28f4d-da18-43de-a163-107eccc9851d.161ae3836823a54e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-142/filler-pod-71a28f4d-da18-43de-a163-107eccc9851d to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-71a28f4d-da18-43de-a163-107eccc9851d.161ae383ea41ca30], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-71a28f4d-da18-43de-a163-107eccc9851d.161ae3843c03b492], Reason = [Created], Message = [Created container filler-pod-71a28f4d-da18-43de-a163-107eccc9851d] STEP: Considering event: Type = [Normal], Name = [filler-pod-71a28f4d-da18-43de-a163-107eccc9851d.161ae3844b936bf7], Reason = [Started], Message = [Started container filler-pod-71a28f4d-da18-43de-a163-107eccc9851d] STEP: Considering event: Type = [Normal], Name = [filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e.161ae383671ec39d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-142/filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e.161ae383b27f603d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e.161ae384347e8234], Reason = [Created], Message = [Created 
container filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e] STEP: Considering event: Type = [Normal], Name = [filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e.161ae38443313ad3], Reason = [Started], Message = [Started container filler-pod-98f5a694-0773-451c-ba93-82558bd9de0e] STEP: Considering event: Type = [Warning], Name = [additional-pod.161ae384ceb29d27], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:19:38.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-142" for this suite. 
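The "Insufficient cpu" event above is pure arithmetic: the test creates filler pods sized to consume each node's remaining allocatable CPU, so an additional pod with any nonzero request cannot fit. A back-of-the-envelope version (the millicore figures are invented; only the 100m requests appear in the log):

```python
# Scheduler CPU-fit check: a pod fits if its request does not exceed
# allocatable minus the CPU already requested on that node.

def fits(node_allocatable_m, requested_m, pod_request_m):
    """True if a pod requesting pod_request_m millicores fits on the node."""
    return node_allocatable_m - requested_m >= pod_request_m

allocatable = 2000                    # e.g. a 2-CPU node, in millicores
already_requested = 100               # like kindnet's cpu=100m seen above
filler = allocatable - already_requested

assert fits(allocatable, already_requested, filler)            # filler pod schedules
assert not fits(allocatable, already_requested + filler, 500)  # -> Insufficient cpu
```

The third node in the "0/3 nodes are available" message is excluded by a taint rather than by CPU, which is why the event reports both reasons.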
Jun 22 14:19:44.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:19:44.933: INFO: namespace sched-pred-142 deletion completed in 6.079070148s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.379 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:19:44.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:19:45.105: INFO: Creating deployment "test-recreate-deployment" Jun 22 14:19:45.140: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 22 14:19:45.178: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 22 14:19:47.187: INFO: Waiting deployment "test-recreate-deployment" to 
complete
Jun 22 14:19:47.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432385, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432385, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432385, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432385, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 22 14:19:49.195: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jun 22 14:19:49.202: INFO: Updating deployment test-recreate-deployment
Jun 22 14:19:49.202: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jun 22 14:19:49.703: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1508,SelfLink:/apis/apps/v1/namespaces/deployment-1508/deployments/test-recreate-deployment,UID:6ee9c6a5-c0e2-4b92-8133-b3246432460b,ResourceVersion:17868704,Generation:2,CreationTimestamp:2020-06-22 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision:
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-22 14:19:49 +0000 UTC 2020-06-22 14:19:49 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-22 14:19:49 +0000 UTC 2020-06-22 14:19:45 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 22 14:19:49.760: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1508,SelfLink:/apis/apps/v1/namespaces/deployment-1508/replicasets/test-recreate-deployment-5c8c9cc69d,UID:93edadd3-6d7f-483f-9400-b9c045535df1,ResourceVersion:17868700,Generation:1,CreationTimestamp:2020-06-22 14:19:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6ee9c6a5-c0e2-4b92-8133-b3246432460b 0xc002866f57 0xc002866f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 14:19:49.760: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 22 14:19:49.760: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1508,SelfLink:/apis/apps/v1/namespaces/deployment-1508/replicasets/test-recreate-deployment-6df85df6b9,UID:855f8894-33e7-41ad-b6a6-f7495339227d,ResourceVersion:17868692,Generation:2,CreationTimestamp:2020-06-22 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6ee9c6a5-c0e2-4b92-8133-b3246432460b 0xc002867027 0xc002867028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 14:19:49.774: INFO: Pod "test-recreate-deployment-5c8c9cc69d-h4tkm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-h4tkm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1508,SelfLink:/api/v1/namespaces/deployment-1508/pods/test-recreate-deployment-5c8c9cc69d-h4tkm,UID:a471bd5b-31e7-4464-9d74-b9cd7892ff62,ResourceVersion:17868703,Generation:0,CreationTimestamp:2020-06-22 14:19:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 93edadd3-6d7f-483f-9400-b9c045535df1 0xc0036e26f7 0xc0036e26f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bvr9q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bvr9q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bvr9q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0036e2770} {node.kubernetes.io/unreachable Exists NoExecute 0xc0036e2790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:19:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:19:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:19:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:19:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:19:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:19:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1508" for this suite. 
Jun 22 14:19:56.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:19:56.168: INFO: namespace deployment-1508 deletion completed in 6.390444167s • [SLOW TEST:11.235 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:19:56.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 22 14:19:56.243: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:20:03.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5737" for this suite. 
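The init-container test above checks ordering semantics: init containers run one at a time, each to completion, before any app container starts, and on a RestartNever pod a failed init container fails the pod. A toy version of that rule (container names and the `run` callback are invented for the sketch):

```python
# Sequential init-container semantics: all init containers must succeed,
# in order, before app containers start.

def run_pod(init_containers, app_containers, run):
    """Run init containers sequentially; start app containers only if all succeed."""
    for c in init_containers:
        if run(c) != 0:       # non-zero exit on RestartNever -> pod Failed
            return "Failed"
    for c in app_containers:
        run(c)
    return "Succeeded"

started = []
def fake_run(name):
    started.append(name)
    return 0

phase = run_pod(["init1", "init2"], ["run1"], fake_run)
```

The conformance test asserts exactly this observable order via pod status: both init containers terminate before the app container's first start.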
Jun 22 14:20:09.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:20:09.740: INFO: namespace init-container-5737 deletion completed in 6.09374615s • [SLOW TEST:13.571 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:20:09.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-r4ds STEP: Creating a pod to test atomic-volume-subpath Jun 22 14:20:09.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-r4ds" in namespace "subpath-1902" to be "success or failure" Jun 22 14:20:09.916: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.117704ms Jun 22 14:20:11.919: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038226545s Jun 22 14:20:13.924: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 4.042753414s Jun 22 14:20:15.927: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 6.046432112s Jun 22 14:20:17.932: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 8.050942764s Jun 22 14:20:19.937: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 10.056090079s Jun 22 14:20:21.942: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 12.061182226s Jun 22 14:20:23.947: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 14.065707326s Jun 22 14:20:25.951: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 16.069785799s Jun 22 14:20:27.955: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 18.074392229s Jun 22 14:20:29.960: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 20.078910361s Jun 22 14:20:31.964: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Running", Reason="", readiness=true. Elapsed: 22.082679836s Jun 22 14:20:33.968: INFO: Pod "pod-subpath-test-downwardapi-r4ds": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.087212369s STEP: Saw pod success Jun 22 14:20:33.968: INFO: Pod "pod-subpath-test-downwardapi-r4ds" satisfied condition "success or failure" Jun 22 14:20:33.972: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-r4ds container test-container-subpath-downwardapi-r4ds: STEP: delete the pod Jun 22 14:20:34.069: INFO: Waiting for pod pod-subpath-test-downwardapi-r4ds to disappear Jun 22 14:20:34.099: INFO: Pod pod-subpath-test-downwardapi-r4ds no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-r4ds Jun 22 14:20:34.099: INFO: Deleting pod "pod-subpath-test-downwardapi-r4ds" in namespace "subpath-1902" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:20:34.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1902" for this suite. Jun 22 14:20:40.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:20:40.261: INFO: namespace subpath-1902 deletion completed in 6.155751764s • [SLOW TEST:30.521 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jun 22 14:20:40.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 14:20:40.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5" in namespace "downward-api-1646" to be "success or failure" Jun 22 14:20:40.340: INFO: Pod "downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529612ms Jun 22 14:20:42.448: INFO: Pod "downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113041341s Jun 22 14:20:44.453: INFO: Pod "downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117388382s STEP: Saw pod success Jun 22 14:20:44.453: INFO: Pod "downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5" satisfied condition "success or failure" Jun 22 14:20:44.456: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5 container client-container: STEP: delete the pod Jun 22 14:20:44.493: INFO: Waiting for pod downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5 to disappear Jun 22 14:20:44.500: INFO: Pod downwardapi-volume-2449559d-c67d-4c1b-b515-77e7d58e9fe5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:20:44.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1646" for this suite. Jun 22 14:20:50.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:20:50.632: INFO: namespace downward-api-1646 deletion completed in 6.128941732s • [SLOW TEST:10.371 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:20:50.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0622 14:21:02.023622 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 22 14:21:02.023: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:21:02.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-603" for this suite. 
Jun 22 14:21:10.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:21:10.274: INFO: namespace gc-603 deletion completed in 8.234551172s • [SLOW TEST:19.641 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:21:10.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 14:21:10.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27" in namespace "projected-8013" to be "success or failure" Jun 22 14:21:10.424: INFO: Pod "downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27": Phase="Pending", 
Reason="", readiness=false. Elapsed: 22.68609ms Jun 22 14:21:12.461: INFO: Pod "downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059003253s Jun 22 14:21:14.464: INFO: Pod "downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062760462s STEP: Saw pod success Jun 22 14:21:14.464: INFO: Pod "downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27" satisfied condition "success or failure" Jun 22 14:21:14.467: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27 container client-container: STEP: delete the pod Jun 22 14:21:14.515: INFO: Waiting for pod downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27 to disappear Jun 22 14:21:14.568: INFO: Pod downwardapi-volume-f7fada47-0bbf-4733-96ab-62a785998a27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:21:14.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8013" for this suite. 
Jun 22 14:21:20.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:21:20.671: INFO: namespace projected-8013 deletion completed in 6.098708299s • [SLOW TEST:10.396 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:21:20.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9863 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 22 14:21:20.773: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 22 14:21:42.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.159:8080/dial?request=hostName&protocol=http&host=10.244.1.158&port=8080&tries=1'] Namespace:pod-network-test-9863 PodName:host-test-container-pod ContainerName:hostexec Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 14:21:42.956: INFO: >>> kubeConfig: /root/.kube/config I0622 14:21:42.993693 7 log.go:172] (0xc002926210) (0xc00024bf40) Create stream I0622 14:21:42.993732 7 log.go:172] (0xc002926210) (0xc00024bf40) Stream added, broadcasting: 1 I0622 14:21:42.996000 7 log.go:172] (0xc002926210) Reply frame received for 1 I0622 14:21:42.996054 7 log.go:172] (0xc002926210) (0xc001c90000) Create stream I0622 14:21:42.996070 7 log.go:172] (0xc002926210) (0xc001c90000) Stream added, broadcasting: 3 I0622 14:21:42.997438 7 log.go:172] (0xc002926210) Reply frame received for 3 I0622 14:21:42.997529 7 log.go:172] (0xc002926210) (0xc001bfe0a0) Create stream I0622 14:21:42.997543 7 log.go:172] (0xc002926210) (0xc001bfe0a0) Stream added, broadcasting: 5 I0622 14:21:42.998853 7 log.go:172] (0xc002926210) Reply frame received for 5 I0622 14:21:43.114258 7 log.go:172] (0xc002926210) Data frame received for 3 I0622 14:21:43.114281 7 log.go:172] (0xc001c90000) (3) Data frame handling I0622 14:21:43.114293 7 log.go:172] (0xc001c90000) (3) Data frame sent I0622 14:21:43.114954 7 log.go:172] (0xc002926210) Data frame received for 5 I0622 14:21:43.114983 7 log.go:172] (0xc001bfe0a0) (5) Data frame handling I0622 14:21:43.115012 7 log.go:172] (0xc002926210) Data frame received for 3 I0622 14:21:43.115026 7 log.go:172] (0xc001c90000) (3) Data frame handling I0622 14:21:43.116877 7 log.go:172] (0xc002926210) Data frame received for 1 I0622 14:21:43.116889 7 log.go:172] (0xc00024bf40) (1) Data frame handling I0622 14:21:43.116899 7 log.go:172] (0xc00024bf40) (1) Data frame sent I0622 14:21:43.116908 7 log.go:172] (0xc002926210) (0xc00024bf40) Stream removed, broadcasting: 1 I0622 14:21:43.116916 7 log.go:172] (0xc002926210) Go away received I0622 14:21:43.117016 7 log.go:172] (0xc002926210) (0xc00024bf40) Stream removed, broadcasting: 1 I0622 14:21:43.117042 7 log.go:172] (0xc002926210) (0xc001c90000) Stream removed, broadcasting: 3 
I0622 14:21:43.117062 7 log.go:172] (0xc002926210) (0xc001bfe0a0) Stream removed, broadcasting: 5 Jun 22 14:21:43.117: INFO: Waiting for endpoints: map[] Jun 22 14:21:43.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.159:8080/dial?request=hostName&protocol=http&host=10.244.2.246&port=8080&tries=1'] Namespace:pod-network-test-9863 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 22 14:21:43.120: INFO: >>> kubeConfig: /root/.kube/config I0622 14:21:43.152287 7 log.go:172] (0xc000db6bb0) (0xc0029892c0) Create stream I0622 14:21:43.152323 7 log.go:172] (0xc000db6bb0) (0xc0029892c0) Stream added, broadcasting: 1 I0622 14:21:43.154896 7 log.go:172] (0xc000db6bb0) Reply frame received for 1 I0622 14:21:43.154948 7 log.go:172] (0xc000db6bb0) (0xc001a04280) Create stream I0622 14:21:43.154966 7 log.go:172] (0xc000db6bb0) (0xc001a04280) Stream added, broadcasting: 3 I0622 14:21:43.155879 7 log.go:172] (0xc000db6bb0) Reply frame received for 3 I0622 14:21:43.155921 7 log.go:172] (0xc000db6bb0) (0xc001c900a0) Create stream I0622 14:21:43.155932 7 log.go:172] (0xc000db6bb0) (0xc001c900a0) Stream added, broadcasting: 5 I0622 14:21:43.156768 7 log.go:172] (0xc000db6bb0) Reply frame received for 5 I0622 14:21:43.224593 7 log.go:172] (0xc000db6bb0) Data frame received for 3 I0622 14:21:43.224620 7 log.go:172] (0xc001a04280) (3) Data frame handling I0622 14:21:43.224634 7 log.go:172] (0xc001a04280) (3) Data frame sent I0622 14:21:43.224798 7 log.go:172] (0xc000db6bb0) Data frame received for 5 I0622 14:21:43.224810 7 log.go:172] (0xc001c900a0) (5) Data frame handling I0622 14:21:43.224934 7 log.go:172] (0xc000db6bb0) Data frame received for 3 I0622 14:21:43.224957 7 log.go:172] (0xc001a04280) (3) Data frame handling I0622 14:21:43.226479 7 log.go:172] (0xc000db6bb0) Data frame received for 1 I0622 14:21:43.226503 7 log.go:172] (0xc0029892c0) (1) Data frame handling I0622 
14:21:43.226523 7 log.go:172] (0xc0029892c0) (1) Data frame sent I0622 14:21:43.226539 7 log.go:172] (0xc000db6bb0) (0xc0029892c0) Stream removed, broadcasting: 1 I0622 14:21:43.226561 7 log.go:172] (0xc000db6bb0) Go away received I0622 14:21:43.226644 7 log.go:172] (0xc000db6bb0) (0xc0029892c0) Stream removed, broadcasting: 1 I0622 14:21:43.226665 7 log.go:172] (0xc000db6bb0) (0xc001a04280) Stream removed, broadcasting: 3 I0622 14:21:43.226679 7 log.go:172] (0xc000db6bb0) (0xc001c900a0) Stream removed, broadcasting: 5 Jun 22 14:21:43.226: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:21:43.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9863" for this suite. Jun 22 14:22:05.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:22:05.333: INFO: namespace pod-network-test-9863 deletion completed in 22.099556793s • [SLOW TEST:44.662 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 
14:22:05.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 22 14:22:05.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33" in namespace "downward-api-5143" to be "success or failure" Jun 22 14:22:05.432: INFO: Pod "downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33": Phase="Pending", Reason="", readiness=false. Elapsed: 28.493696ms Jun 22 14:22:07.436: INFO: Pod "downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032100694s Jun 22 14:22:09.439: INFO: Pod "downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035237088s STEP: Saw pod success Jun 22 14:22:09.439: INFO: Pod "downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33" satisfied condition "success or failure" Jun 22 14:22:09.441: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33 container client-container: STEP: delete the pod Jun 22 14:22:09.518: INFO: Waiting for pod downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33 to disappear Jun 22 14:22:09.599: INFO: Pod downwardapi-volume-7891069d-44f8-436c-a079-f0a582861f33 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:22:09.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5143" for this suite. Jun 22 14:22:15.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:22:15.708: INFO: namespace downward-api-5143 deletion completed in 6.103415291s • [SLOW TEST:10.374 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:22:15.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 22 14:22:15.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7105' Jun 22 14:22:18.719: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 22 14:22:18.719: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jun 22 14:22:18.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7105' Jun 22 14:22:18.843: INFO: stderr: "" Jun 22 14:22:18.843: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:22:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7105" for this suite. 
Jun 22 14:22:24.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:22:24.940: INFO: namespace kubectl-7105 deletion completed in 6.094222117s • [SLOW TEST:9.232 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:22:24.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:22:25.002: INFO: Creating deployment "nginx-deployment" Jun 22 14:22:25.080: INFO: Waiting for observed generation 1 Jun 22 14:22:27.447: INFO: Waiting for all required pods to come up Jun 22 14:22:27.552: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 22 14:22:37.559: INFO: Waiting for deployment "nginx-deployment" to complete Jun 22 14:22:37.571: INFO: Updating deployment 
"nginx-deployment" with a non-existent image Jun 22 14:22:37.576: INFO: Updating deployment nginx-deployment Jun 22 14:22:37.576: INFO: Waiting for observed generation 2 Jun 22 14:22:39.642: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 22 14:22:39.690: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 22 14:22:39.692: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 22 14:22:39.700: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 22 14:22:39.700: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 22 14:22:39.702: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 22 14:22:39.707: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 22 14:22:39.707: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 22 14:22:39.712: INFO: Updating deployment nginx-deployment Jun 22 14:22:39.712: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 22 14:22:39.786: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 22 14:22:39.992: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 22 14:22:43.008: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-634,SelfLink:/apis/apps/v1/namespaces/deployment-634/deployments/nginx-deployment,UID:56f4915f-b93f-4e8a-8f3c-9ae2d05d17cd,ResourceVersion:17869709,Generation:3,CreationTimestamp:2020-06-22 14:22:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-06-22 14:22:39 +0000 UTC 2020-06-22 14:22:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-22 14:22:40 +0000 UTC 2020-06-22 14:22:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 22 14:22:43.038: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-634,SelfLink:/apis/apps/v1/namespaces/deployment-634/replicasets/nginx-deployment-55fb7cb77f,UID:d1267d34-a67f-4444-b736-e7e93363873c,ResourceVersion:17869705,Generation:3,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 56f4915f-b93f-4e8a-8f3c-9ae2d05d17cd 0xc0029b4767 0xc0029b4768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 14:22:43.038: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 22 14:22:43.038: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-634,SelfLink:/apis/apps/v1/namespaces/deployment-634/replicasets/nginx-deployment-7b8c6f4498,UID:89b7b7f0-41be-4f66-a2d9-cb8a2d00c657,ResourceVersion:17869699,Generation:3,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 56f4915f-b93f-4e8a-8f3c-9ae2d05d17cd 0xc0029b4837 0xc0029b4838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 22 14:22:43.532: INFO: Pod "nginx-deployment-55fb7cb77f-7xdgj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7xdgj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-7xdgj,UID:d21438de-6604-4490-a7e2-0727b6ef8ab5,ResourceVersion:17869637,Generation:0,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc003511977 0xc003511978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003511a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc003511a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.532: INFO: Pod "nginx-deployment-55fb7cb77f-b92cr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b92cr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-b92cr,UID:705d42d8-a9d8-4dfa-ad46-8975adca0705,ResourceVersion:17869714,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc003511af7 0xc003511af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003511b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc003511b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.532: INFO: Pod "nginx-deployment-55fb7cb77f-cf4q5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cf4q5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-cf4q5,UID:bb20eab9-f336-4941-9c72-c0e9173020f9,ResourceVersion:17869758,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc003511c67 0xc003511c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003511ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003511d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.533: INFO: Pod "nginx-deployment-55fb7cb77f-cjmpg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cjmpg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-cjmpg,UID:7b3c2ac4-062c-442c-ac47-6b3bffd74bf2,ResourceVersion:17869767,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc003511dd7 0xc003511dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003511e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc003511e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.533: INFO: Pod "nginx-deployment-55fb7cb77f-g66g4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g66g4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-g66g4,UID:197381ee-39b8-4369-90bc-6013b043d93c,ResourceVersion:17869733,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc003511f47 0xc003511f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003511fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003511fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.533: INFO: Pod "nginx-deployment-55fb7cb77f-hgjf9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hgjf9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-hgjf9,UID:bab5e899-02d5-482a-a964-bfe34af53c1d,ResourceVersion:17869743,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d040b7 0xc002d040b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04140} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.533: INFO: Pod "nginx-deployment-55fb7cb77f-j6q2k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j6q2k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-j6q2k,UID:b2f8618a-6dd6-4601-83f4-4c6b1dffccd8,ResourceVersion:17869778,Generation:0,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d04237 0xc002d04238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002d042b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d042d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.165,StartTime:2020-06-22 14:22:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-j84sp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j84sp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-j84sp,UID:e5592bb2-0cdd-4d39-97ed-168c129eb3ac,ResourceVersion:17869620,Generation:0,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d043e7 
0xc002d043e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04460} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-jr9pl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jr9pl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-jr9pl,UID:88906d16-121c-4db7-baf7-a7c95503629b,ResourceVersion:17869771,Generation:0,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d04557 0xc002d04558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d045d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d045f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.254,StartTime:2020-06-22 14:22:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-nb4fw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nb4fw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-nb4fw,UID:0a48241c-ef53-4586-ac61-8b4cd70600f4,ResourceVersion:17869717,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d046e7 0xc002d046e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002d04760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-p7lwn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p7lwn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-p7lwn,UID:e206c74a-c152-4735-bf85-1713319abf2f,ResourceVersion:17869722,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d04857 0xc002d04858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d048d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d048f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-q86t7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q86t7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-q86t7,UID:a6c99e56-a30a-4ea2-b057-a96690e4e5b0,ResourceVersion:17869635,Generation:0,CreationTimestamp:2020-06-22 14:22:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d049c7 0xc002d049c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.534: INFO: Pod "nginx-deployment-55fb7cb77f-zhzgk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zhzgk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-55fb7cb77f-zhzgk,UID:5747fbaf-1576-47c1-acf1-b5b43834318d,ResourceVersion:17869740,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d1267d34-a67f-4444-b736-e7e93363873c 0xc002d04b47 0xc002d04b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002d04bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-24trg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-24trg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-24trg,UID:27001d0f-8cfe-48c5-8ccd-d7f99047688b,ResourceVersion:17869701,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d04cb7 0xc002d04cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-4gzlr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4gzlr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-4gzlr,UID:408cc23e-9d5f-4640-9f23-d1cb978f04a9,ResourceVersion:17869776,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d04e17 0xc002d04e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d04eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-5vnqz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5vnqz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-5vnqz,UID:a30e4a84-9a41-4f0b-ad05-9c6e28b9bf71,ResourceVersion:17869751,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d04f77 0xc002d04f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d04ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-6cmcp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6cmcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-6cmcp,UID:09535d17-46b4-4e4c-838e-e0e02cc770fd,ResourceVersion:17869770,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d050d7 0xc002d050d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-bmdq4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bmdq4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-bmdq4,UID:7b12a114-d4a7-4f3e-8847-1b7dd8ba5d5c,ResourceVersion:17869554,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05237 0xc002d05238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d052b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d052d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.253,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0866c58909395d7f1a6be0e771b14f2bfa7b14347f2a2b5560895ab3823ed087}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.535: INFO: Pod "nginx-deployment-7b8c6f4498-gzn5w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gzn5w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-gzn5w,UID:e2bca1d6-87c2-4d47-96bf-76710603aac6,ResourceVersion:17869525,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d053a7 0xc002d053a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05420} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.249,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ec76fc3e5b81cde890a5394e5d930c692f5b54bb34875a5191447d1ff78640b4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-j5jzz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5jzz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-j5jzz,UID:4dc4d149-755c-4c42-a0eb-1cd6261ef44d,ResourceVersion:17869708,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05517 0xc002d05518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d055b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-jmfdf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jmfdf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-jmfdf,UID:b494e687-434e-4090-ac61-c216cb862e97,ResourceVersion:17869557,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05677 0xc002d05678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d056f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.251,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://89ae68ec782fbd999054b76196e57ac1b86f39a9959797e7fd5a98c8cc22d987}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-k6fcg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k6fcg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-k6fcg,UID:025f40ee-4bfc-43c4-9107-8760877ae6b4,ResourceVersion:17869547,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d057e7 0xc002d057e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05860} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.160,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8e9a69fa23c72153039128d9b21f4576ea99c21be9523573e3baac3ca2f41a0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-kvknr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kvknr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-kvknr,UID:7df8da0e-5392-4571-8d48-c269667f44e4,ResourceVersion:17869578,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05957 0xc002d05958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d059d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d059f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.161,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://869ca3446fe2b665cfa747b99e3a639a234c9fa0a05eaabbe68a6b9ddc540c89}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-ljcsb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ljcsb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-ljcsb,UID:1325082f-a0a8-4790-a1ae-a630a5aaa86e,ResourceVersion:17869720,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05ac7 0xc002d05ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.536: INFO: Pod "nginx-deployment-7b8c6f4498-llldb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-llldb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-llldb,UID:0efb2cad-26b6-46a6-9b3e-67372bea8d36,ResourceVersion:17869539,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05c27 0xc002d05c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.250,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9a5775ae7a6b4dad0ad5b496d3cde649efc4d62e5338a35348236647b21b4d63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.537: INFO: Pod "nginx-deployment-7b8c6f4498-q7vjv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q7vjv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-q7vjv,UID:480f6be9-1ff6-427c-b95c-16adfe984c17,ResourceVersion:17869761,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05d97 0xc002d05d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.537: INFO: Pod "nginx-deployment-7b8c6f4498-qp89q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qp89q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-qp89q,UID:16cf3659-0844-4ea5-a652-fd8c5b28853b,ResourceVersion:17869706,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002d05ef7 0xc002d05ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d05f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d05f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.537: INFO: Pod "nginx-deployment-7b8c6f4498-r6v7c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r6v7c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-r6v7c,UID:ae3766ad-40d7-4696-8448-28360ed788bf,ResourceVersion:17869734,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f16087 0xc002f16088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f16100} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f16120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.537: INFO: Pod "nginx-deployment-7b8c6f4498-tlbd9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tlbd9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-tlbd9,UID:a2e83a9e-00ca-435b-a815-b679ddec339b,ResourceVersion:17869573,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f161f7 0xc002f161f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f16270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f16290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.163,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6ca82b45cf75b259daebcf2c942990cfcfedb511ade1716b7397dc82442c8768}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.538: INFO: Pod "nginx-deployment-7b8c6f4498-v7mz8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v7mz8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-v7mz8,UID:458f0018-7d26-4b56-9f0f-8b64c803a31e,ResourceVersion:17869725,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f16367 0xc002f16368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f163e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f16400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.538: INFO: Pod "nginx-deployment-7b8c6f4498-vmqbq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vmqbq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-vmqbq,UID:082954aa-b9e2-4437-b7ad-38bd94c97e87,ResourceVersion:17869766,Generation:0,CreationTimestamp:2020-06-22 14:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f164c7 0xc002f164c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f16540} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f16560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.538: INFO: Pod "nginx-deployment-7b8c6f4498-wxdms" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wxdms,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-wxdms,UID:ef4277cf-4003-4915-96b4-23179587505b,ResourceVersion:17869737,Generation:0,CreationTimestamp:2020-06-22 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f16627 0xc002f16628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f166a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f166c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-22 14:22:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 22 14:22:43.538: INFO: Pod "nginx-deployment-7b8c6f4498-xzgp6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xzgp6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-634,SelfLink:/api/v1/namespaces/deployment-634/pods/nginx-deployment-7b8c6f4498-xzgp6,UID:d39395b5-edb4-4215-8f92-fa7fead074f2,ResourceVersion:17869549,Generation:0,CreationTimestamp:2020-06-22 14:22:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 89b7b7f0-41be-4f66-a2d9-cb8a2d00c657 0xc002f16787 0xc002f16788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vr6wx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vr6wx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vr6wx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f16800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f16820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:22:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.252,StartTime:2020-06-22 14:22:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-22 14:22:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d58e753228ba7ed71e69a296288b27ea2c3ee57b2cae36f7ba4facabf3082dc6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:22:43.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-634" for this suite. Jun 22 14:23:03.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:23:03.320: INFO: namespace deployment-634 deletion completed in 19.413707726s • [SLOW TEST:38.379 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:23:03.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:23:03.563: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 22 14:23:08.567: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 22 14:23:08.567: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 22 14:23:10.572: INFO: Creating deployment "test-rollover-deployment" Jun 22 14:23:10.595: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 22 14:23:12.603: INFO: Check revision of new replica set for 
deployment "test-rollover-deployment" Jun 22 14:23:12.610: INFO: Ensure that both replica sets have 1 created replica Jun 22 14:23:12.616: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 22 14:23:12.621: INFO: Updating deployment test-rollover-deployment Jun 22 14:23:12.621: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 22 14:23:14.635: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 22 14:23:14.642: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 22 14:23:14.649: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:14.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432592, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:16.661: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:16.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:18.656: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:18.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:20.657: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:20.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:22.657: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:22.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:24.657: INFO: all replica sets need to contain the pod-template-hash label Jun 22 14:23:24.658: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432596, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728432590, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 22 14:23:26.658: INFO: Jun 22 14:23:26.658: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 22 14:23:26.750: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2463,SelfLink:/apis/apps/v1/namespaces/deployment-2463/deployments/test-rollover-deployment,UID:f40fdc58-abaf-478c-a407-e2da72f1af75,ResourceVersion:17870152,Generation:2,CreationTimestamp:2020-06-22 14:23:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-22 14:23:10 +0000 UTC 2020-06-22 
14:23:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-22 14:23:26 +0000 UTC 2020-06-22 14:23:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 22 14:23:26.754: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2463,SelfLink:/apis/apps/v1/namespaces/deployment-2463/replicasets/test-rollover-deployment-854595fc44,UID:cc020297-9744-4cda-84bd-81488b268b4b,ResourceVersion:17870140,Generation:2,CreationTimestamp:2020-06-22 14:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f40fdc58-abaf-478c-a407-e2da72f1af75 0xc0020a6077 0xc0020a6078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 22 14:23:26.754: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 22 14:23:26.754: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2463,SelfLink:/apis/apps/v1/namespaces/deployment-2463/replicasets/test-rollover-controller,UID:107a8436-9235-4fed-addf-71910ea2992e,ResourceVersion:17870151,Generation:2,CreationTimestamp:2020-06-22 14:23:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
f40fdc58-abaf-478c-a407-e2da72f1af75 0xc0025efef7 0xc0025efef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 14:23:26.754: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2463,SelfLink:/apis/apps/v1/namespaces/deployment-2463/replicasets/test-rollover-deployment-9b8b997cf,UID:3f3829cf-ea35-45cf-abe8-257f59169c1c,ResourceVersion:17870105,Generation:2,CreationTimestamp:2020-06-22 14:23:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f40fdc58-abaf-478c-a407-e2da72f1af75 0xc0020a6140 0xc0020a6141}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 22 14:23:26.757: INFO: Pod "test-rollover-deployment-854595fc44-zhxz2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-zhxz2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2463,SelfLink:/api/v1/namespaces/deployment-2463/pods/test-rollover-deployment-854595fc44-zhxz2,UID:118ac9b5-acef-4086-a79d-9c7a72771934,ResourceVersion:17870118,Generation:0,CreationTimestamp:2020-06-22 14:23:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 cc020297-9744-4cda-84bd-81488b268b4b 0xc0020a7977 0xc0020a7978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d6bg7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d6bg7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d6bg7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020a7a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020a7a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:23:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:23:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:23:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.178,StartTime:2020-06-22 14:23:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-22 14:23:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f99ae341c032c15fc6a7b8d8f6704f8e0ef6ef4804de68e8afcd34759f8271f8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:23:26.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2463" for this suite. Jun 22 14:23:32.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:23:32.864: INFO: namespace deployment-2463 deletion completed in 6.104419956s • [SLOW TEST:29.544 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:23:32.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jun 22 14:23:33.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 22 14:23:33.149: INFO: stderr: "" Jun 22 14:23:33.149: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:23:33.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9141" for this suite. Jun 22 14:23:39.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:23:39.277: INFO: namespace kubectl-9141 deletion completed in 6.088849957s • [SLOW TEST:6.412 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:23:39.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0ba7d47a-8f54-47c2-bfce-deccfee6bbbf STEP: Creating a pod to test consume secrets Jun 22 14:23:39.345: INFO: Waiting up to 5m0s for pod "pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411" in namespace "secrets-4134" to be "success or failure" Jun 22 14:23:39.356: INFO: Pod "pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411": Phase="Pending", Reason="", readiness=false. Elapsed: 10.346028ms Jun 22 14:23:41.360: INFO: Pod "pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014392738s Jun 22 14:23:43.364: INFO: Pod "pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018582003s STEP: Saw pod success Jun 22 14:23:43.364: INFO: Pod "pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411" satisfied condition "success or failure" Jun 22 14:23:43.368: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411 container secret-volume-test: STEP: delete the pod Jun 22 14:23:43.496: INFO: Waiting for pod pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411 to disappear Jun 22 14:23:43.518: INFO: Pod pod-secrets-a70a246b-de66-40b2-b310-a85ad37c8411 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:23:43.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4134" for this suite. Jun 22 14:23:49.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:23:49.678: INFO: namespace secrets-4134 deletion completed in 6.156613037s • [SLOW TEST:10.401 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:23:49.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:23:49.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3770" for this suite. Jun 22 14:23:55.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:23:56.020: INFO: namespace kubelet-test-3770 deletion completed in 6.113604284s • [SLOW TEST:6.342 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:23:56.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:24:00.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8514" for this suite. Jun 22 14:24:06.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:24:06.264: INFO: namespace kubelet-test-8514 deletion completed in 6.144526147s • [SLOW TEST:10.244 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 
14:24:06.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 22 14:24:06.305: INFO: Waiting up to 5m0s for pod "pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8" in namespace "emptydir-5140" to be "success or failure" Jun 22 14:24:06.376: INFO: Pod "pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.545714ms Jun 22 14:24:08.380: INFO: Pod "pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075322878s Jun 22 14:24:10.389: INFO: Pod "pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084635585s STEP: Saw pod success Jun 22 14:24:10.389: INFO: Pod "pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8" satisfied condition "success or failure" Jun 22 14:24:10.391: INFO: Trying to get logs from node iruya-worker2 pod pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8 container test-container: STEP: delete the pod Jun 22 14:24:10.519: INFO: Waiting for pod pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8 to disappear Jun 22 14:24:10.567: INFO: Pod pod-0551aa2c-c5bd-4578-84d7-268dfe8fa9a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:24:10.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5140" for this suite. 
Jun 22 14:24:16.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:24:16.832: INFO: namespace emptydir-5140 deletion completed in 6.260404845s • [SLOW TEST:10.567 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:24:16.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 22 14:24:16.874: INFO: Waiting up to 5m0s for pod "downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db" in namespace "downward-api-6878" to be "success or failure" Jun 22 14:24:16.902: INFO: Pod "downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db": Phase="Pending", Reason="", readiness=false. Elapsed: 27.486863ms Jun 22 14:24:18.906: INFO: Pod "downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032124206s Jun 22 14:24:20.911: INFO: Pod "downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036423879s STEP: Saw pod success Jun 22 14:24:20.911: INFO: Pod "downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db" satisfied condition "success or failure" Jun 22 14:24:20.913: INFO: Trying to get logs from node iruya-worker pod downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db container dapi-container: STEP: delete the pod Jun 22 14:24:21.012: INFO: Waiting for pod downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db to disappear Jun 22 14:24:21.017: INFO: Pod downward-api-edc8873b-f0e8-44f9-8cb7-52165dba32db no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:24:21.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6878" for this suite. Jun 22 14:24:27.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:24:27.121: INFO: namespace downward-api-6878 deletion completed in 6.100273011s • [SLOW TEST:10.288 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:24:27.122: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 22 14:24:27.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6152' Jun 22 14:24:27.439: INFO: stderr: "" Jun 22 14:24:27.439: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 22 14:24:27.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6152' Jun 22 14:24:27.726: INFO: stderr: "" Jun 22 14:24:27.726: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 22 14:24:28.731: INFO: Selector matched 1 pods for map[app:redis] Jun 22 14:24:28.731: INFO: Found 0 / 1 Jun 22 14:24:29.731: INFO: Selector matched 1 pods for map[app:redis] Jun 22 14:24:29.731: INFO: Found 0 / 1 Jun 22 14:24:30.731: INFO: Selector matched 1 pods for map[app:redis] Jun 22 14:24:30.731: INFO: Found 0 / 1 Jun 22 14:24:31.731: INFO: Selector matched 1 pods for map[app:redis] Jun 22 14:24:31.731: INFO: Found 1 / 1 Jun 22 14:24:31.731: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 22 14:24:31.735: INFO: Selector matched 1 pods for map[app:redis] Jun 22 14:24:31.735: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 22 14:24:31.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-bmrpd --namespace=kubectl-6152' Jun 22 14:24:31.846: INFO: stderr: "" Jun 22 14:24:31.846: INFO: stdout: "Name: redis-master-bmrpd\nNamespace: kubectl-6152\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Mon, 22 Jun 2020 14:24:27 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.17\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://6b39d096cd75e88b12715d08c2578d810385fe7a757fa1e622a04029ecc1aca4\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 22 Jun 2020 14:24:30 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4m9d4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4m9d4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4m9d4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6152/redis-master-bmrpd to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Jun 22 14:24:31.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-6152' Jun 22 14:24:31.978: INFO: stderr: "" Jun 22 14:24:31.978: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6152\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-bmrpd\n" Jun 22 14:24:31.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6152' Jun 22 14:24:32.085: INFO: stderr: "" Jun 22 14:24:32.085: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6152\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.246.145\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.17:6379\nSession Affinity: None\nEvents: \n" Jun 22 14:24:32.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 22 14:24:32.208: INFO: stderr: "" Jun 22 14:24:32.208: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 22 Jun 2020 14:24:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 22 Jun 2020 14:24:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 22 Jun 2020 14:24:17 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 22 Jun 2020 14:24:17 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 98d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\n kube-system 
kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 98d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 98d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 22 14:24:32.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6152' Jun 22 14:24:32.304: INFO: stderr: "" Jun 22 14:24:32.304: INFO: stdout: "Name: kubectl-6152\nLabels: e2e-framework=kubectl\n e2e-run=f3ca41a4-0a95-4d7d-8964-dd6f46d82336\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:24:32.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6152" for this suite. 
Jun 22 14:24:54.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:24:54.447: INFO: namespace kubectl-6152 deletion completed in 22.139712112s • [SLOW TEST:27.325 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:24:54.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-5664 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5664 to expose endpoints map[] Jun 22 14:24:54.575: INFO: Get endpoints failed (20.905846ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 22 14:24:55.578: INFO: successfully validated that service multi-endpoint-test in namespace services-5664 exposes 
endpoints map[] (1.024276143s elapsed) STEP: Creating pod pod1 in namespace services-5664 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5664 to expose endpoints map[pod1:[100]] Jun 22 14:24:59.666: INFO: successfully validated that service multi-endpoint-test in namespace services-5664 exposes endpoints map[pod1:[100]] (4.079541959s elapsed) STEP: Creating pod pod2 in namespace services-5664 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5664 to expose endpoints map[pod1:[100] pod2:[101]] Jun 22 14:25:03.778: INFO: successfully validated that service multi-endpoint-test in namespace services-5664 exposes endpoints map[pod1:[100] pod2:[101]] (4.107713344s elapsed) STEP: Deleting pod pod1 in namespace services-5664 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5664 to expose endpoints map[pod2:[101]] Jun 22 14:25:03.796: INFO: successfully validated that service multi-endpoint-test in namespace services-5664 exposes endpoints map[pod2:[101]] (13.761963ms elapsed) STEP: Deleting pod pod2 in namespace services-5664 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5664 to expose endpoints map[] Jun 22 14:25:03.820: INFO: successfully validated that service multi-endpoint-test in namespace services-5664 exposes endpoints map[] (20.26918ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:25:03.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5664" for this suite. 
Jun 22 14:25:25.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:25:25.954: INFO: namespace services-5664 deletion completed in 22.078887132s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.507 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:25:25.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jun 22 14:25:26.060: INFO: Waiting up to 5m0s for pod "client-containers-2b9518af-7c97-4ff1-8e4b-195804214614" in namespace "containers-7385" to be "success or failure"
Jun 22 14:25:26.091: INFO: Pod "client-containers-2b9518af-7c97-4ff1-8e4b-195804214614": Phase="Pending", Reason="", readiness=false. Elapsed: 31.510473ms
Jun 22 14:25:28.189: INFO: Pod "client-containers-2b9518af-7c97-4ff1-8e4b-195804214614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129582663s
Jun 22 14:25:30.194: INFO: Pod "client-containers-2b9518af-7c97-4ff1-8e4b-195804214614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134754259s
STEP: Saw pod success
Jun 22 14:25:30.195: INFO: Pod "client-containers-2b9518af-7c97-4ff1-8e4b-195804214614" satisfied condition "success or failure"
Jun 22 14:25:30.198: INFO: Trying to get logs from node iruya-worker2 pod client-containers-2b9518af-7c97-4ff1-8e4b-195804214614 container test-container: 
STEP: delete the pod
Jun 22 14:25:30.216: INFO: Waiting for pod client-containers-2b9518af-7c97-4ff1-8e4b-195804214614 to disappear
Jun 22 14:25:30.221: INFO: Pod client-containers-2b9518af-7c97-4ff1-8e4b-195804214614 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:25:30.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7385" for this suite.
Jun 22 14:25:36.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:25:36.331: INFO: namespace containers-7385 deletion completed in 6.106563056s
• [SLOW TEST:10.377 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:25:36.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7650
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 22 14:25:36.379: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 22 14:26:04.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.183:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7650 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 22 14:26:04.491: INFO: >>> kubeConfig: /root/.kube/config
I0622 14:26:04.522515 7 log.go:172] (0xc001dd0420) (0xc002719cc0) Create stream
I0622 14:26:04.522551 7 log.go:172] (0xc001dd0420) (0xc002719cc0) Stream added, broadcasting: 1
I0622 14:26:04.524383 7 log.go:172] (0xc001dd0420) Reply frame received for 1
I0622 14:26:04.524439 7 log.go:172] (0xc001dd0420) (0xc002c6b720) Create stream
I0622 14:26:04.524452 7 log.go:172] (0xc001dd0420) (0xc002c6b720) Stream added, broadcasting: 3
I0622 14:26:04.525774 7 log.go:172] (0xc001dd0420) Reply frame received for 3
I0622 14:26:04.525815 7 log.go:172] (0xc001dd0420) (0xc002719e00) Create stream
I0622 14:26:04.525828 7 log.go:172] (0xc001dd0420) (0xc002719e00) Stream added, broadcasting: 5
I0622 14:26:04.526768 7 log.go:172] (0xc001dd0420) Reply frame received for 5
I0622 14:26:04.597381 7 log.go:172] (0xc001dd0420) Data frame received for 3
I0622 14:26:04.597414 7 log.go:172] (0xc002c6b720) (3) Data frame handling
I0622 14:26:04.597424 7 log.go:172] (0xc002c6b720) (3) Data frame sent
I0622 14:26:04.597430 7 log.go:172] (0xc001dd0420) Data frame received for 3
I0622 14:26:04.597435 7 log.go:172] (0xc002c6b720) (3) Data frame handling
I0622 14:26:04.597504 7 log.go:172] (0xc001dd0420) Data frame received for 5
I0622 14:26:04.597544 7 log.go:172] (0xc002719e00) (5) Data frame handling
I0622 14:26:04.599290 7 log.go:172] (0xc001dd0420) Data frame received for 1
I0622 14:26:04.599316 7 log.go:172] (0xc002719cc0) (1) Data frame handling
I0622 14:26:04.599351 7 log.go:172] (0xc002719cc0) (1) Data frame sent
I0622 14:26:04.599369 7 log.go:172] (0xc001dd0420) (0xc002719cc0) Stream removed, broadcasting: 1
I0622 14:26:04.599471 7 log.go:172] (0xc001dd0420) (0xc002719cc0) Stream removed, broadcasting: 1
I0622 14:26:04.599547 7 log.go:172] (0xc001dd0420) (0xc002c6b720) Stream removed, broadcasting: 3
I0622 14:26:04.599739 7 log.go:172] (0xc001dd0420) (0xc002719e00) Stream removed, broadcasting: 5
Jun 22 14:26:04.599: INFO: Found all expected endpoints: [netserver-0]
I0622 14:26:04.599789 7 log.go:172] (0xc001dd0420) Go away received
Jun 22 14:26:04.603: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.19:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7650 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 22 14:26:04.603: INFO: >>> kubeConfig: /root/.kube/config
I0622 14:26:04.634245 7 log.go:172] (0xc002ef4c60) (0xc00274a6e0) Create stream
I0622 14:26:04.634269 7 log.go:172] (0xc002ef4c60) (0xc00274a6e0) Stream added, broadcasting: 1
I0622 14:26:04.636217 7 log.go:172] (0xc002ef4c60) Reply frame received for 1
I0622 14:26:04.636265 7 log.go:172] (0xc002ef4c60) (0xc002c6b7c0) Create stream
I0622 14:26:04.636277 7 log.go:172] (0xc002ef4c60) (0xc002c6b7c0) Stream added, broadcasting: 3
I0622 14:26:04.637550 7 log.go:172] (0xc002ef4c60) Reply frame received for 3
I0622 14:26:04.637616 7 log.go:172] (0xc002ef4c60) (0xc000d7e3c0) Create stream
I0622 14:26:04.637638 7 log.go:172] (0xc002ef4c60) (0xc000d7e3c0) Stream added, broadcasting: 5
I0622 14:26:04.638881 7 log.go:172] (0xc002ef4c60) Reply frame received for 5
I0622 14:26:04.712071 7 log.go:172] (0xc002ef4c60) Data frame received for 5
I0622 14:26:04.712191 7 log.go:172] (0xc000d7e3c0) (5) Data frame handling
I0622 14:26:04.712220 7 log.go:172] (0xc002ef4c60) Data frame received for 3
I0622 14:26:04.712230 7 log.go:172] (0xc002c6b7c0) (3) Data frame handling
I0622 14:26:04.712248 7 log.go:172] (0xc002c6b7c0) (3) Data frame sent
I0622 14:26:04.712260 7 log.go:172] (0xc002ef4c60) Data frame received for 3
I0622 14:26:04.712269 7 log.go:172] (0xc002c6b7c0) (3) Data frame handling
I0622 14:26:04.714298 7 log.go:172] (0xc002ef4c60) Data frame received for 1
I0622 14:26:04.714312 7 log.go:172] (0xc00274a6e0) (1) Data frame handling
I0622 14:26:04.714320 7 log.go:172] (0xc00274a6e0) (1) Data frame sent
I0622 14:26:04.714327 7 log.go:172] (0xc002ef4c60) (0xc00274a6e0) Stream removed, broadcasting: 1
I0622 14:26:04.714380 7 log.go:172] (0xc002ef4c60) Go away received
I0622 14:26:04.714406 7 log.go:172] (0xc002ef4c60) (0xc00274a6e0) Stream removed, broadcasting: 1
I0622 14:26:04.714421 7 log.go:172] (0xc002ef4c60) (0xc002c6b7c0) Stream removed, broadcasting: 3
I0622 14:26:04.714437 7 log.go:172] (0xc002ef4c60) (0xc000d7e3c0) Stream removed, broadcasting: 5
Jun 22 14:26:04.714: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:26:04.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7650" for this suite.
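The connectivity check above runs `curl http://<pod-ip>:8080/hostName` from a host-exec pod and compares the reply to the expected pod name (netserver-0, netserver-1). A self-contained sketch of the same check, with a local stdlib HTTP server standing in for the netserver pod; all names here are stand-ins for illustration, not the e2e images:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostNameHandler(BaseHTTPRequestHandler):
    """Stand-in for the e2e netserver: answers /hostName with a fixed name."""
    hostname = b"netserver-0"

    def do_GET(self):
        if self.path == "/hostName":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(self.hostname)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet, like piping curl through grep

def check_endpoint(url, expected):
    """Fetch url and report whether the body matches the expected pod name."""
    body = urllib.request.urlopen(url, timeout=5).read()
    return body == expected

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
ok = check_endpoint(f"http://127.0.0.1:{port}/hostName", b"netserver-0")
server.shutdown()
```

The real test does this over the pod network via an exec'd curl; the I0622 lines above are the SPDY streams (1 = error, 3 = stdout, 5 = stderr) that carry that exec session.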
Jun 22 14:26:28.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:26:28.804: INFO: namespace pod-network-test-7650 deletion completed in 24.085400895s
• [SLOW TEST:52.473 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:26:28.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 22 14:26:28.907: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:28.912: INFO: Number of nodes with available pods: 0
Jun 22 14:26:28.912: INFO: Node iruya-worker is running more than one daemon pod
Jun 22 14:26:29.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:29.920: INFO: Number of nodes with available pods: 0
Jun 22 14:26:29.920: INFO: Node iruya-worker is running more than one daemon pod
Jun 22 14:26:30.918: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:30.922: INFO: Number of nodes with available pods: 0
Jun 22 14:26:30.922: INFO: Node iruya-worker is running more than one daemon pod
Jun 22 14:26:32.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:32.044: INFO: Number of nodes with available pods: 0
Jun 22 14:26:32.044: INFO: Node iruya-worker is running more than one daemon pod
Jun 22 14:26:32.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:32.922: INFO: Number of nodes with available pods: 1
Jun 22 14:26:32.922: INFO: Node iruya-worker is running more than one daemon pod
Jun 22 14:26:33.918: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:33.922: INFO: Number of nodes with available pods: 2
Jun 22 14:26:33.922: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 22 14:26:33.949: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 22 14:26:33.981: INFO: Number of nodes with available pods: 2
Jun 22 14:26:33.981: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6063, will wait for the garbage collector to delete the pods
Jun 22 14:26:35.065: INFO: Deleting DaemonSet.extensions daemon-set took: 6.609237ms
Jun 22 14:26:35.365: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27543ms
Jun 22 14:26:42.168: INFO: Number of nodes with available pods: 0
Jun 22 14:26:42.168: INFO: Number of running nodes: 0, number of available pods: 0
Jun 22 14:26:42.171: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6063/daemonsets","resourceVersion":"17870914"},"items":null}
Jun 22 14:26:42.173: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6063/pods","resourceVersion":"17870914"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:26:42.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6063" for this suite.
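The DaemonSet test above forces a daemon pod's phase to 'Failed' and asserts the controller revives it on that node. A toy, single-pass model of that reconcile behavior; this is not the real controller (which also handles taints, tolerations, and node selectors, as the "can't tolerate node iruya-control-plane" lines show), just the revive-on-failure rule the test exercises:

```python
def reconcile_daemonset(nodes, pods):
    """One reconcile pass for a DaemonSet-like controller (simplified sketch).

    `nodes` is the list of schedulable node names; `pods` maps node name ->
    daemon pod phase. Failed pods are deleted and recreated as Pending, and
    nodes missing a pod get one. Returns the new node -> phase map.
    """
    new_pods = {}
    for node in nodes:
        phase = pods.get(node)
        if phase is None or phase == "Failed":
            new_pods[node] = "Pending"  # (re)create the daemon pod
        else:
            new_pods[node] = phase
    return new_pods
```

Running one pass over the two worker nodes from the log, with one pod forced to Failed, yields a fresh Pending pod on that node while the healthy pod is untouched.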
Jun 22 14:26:48.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:26:48.275: INFO: namespace daemonsets-6063 deletion completed in 6.091603671s
• [SLOW TEST:19.470 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:26:48.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-00c4f779-55bf-41ea-8eac-4b0c8d7dcaa9
STEP: Creating a pod to test consume configMaps
Jun 22 14:26:48.378: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca" in namespace "projected-9830" to be "success or failure"
Jun 22 14:26:48.399: INFO: Pod "pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 20.262452ms
Jun 22 14:26:50.402: INFO: Pod "pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023912494s
Jun 22 14:26:52.407: INFO: Pod "pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028748219s
STEP: Saw pod success
Jun 22 14:26:52.407: INFO: Pod "pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca" satisfied condition "success or failure"
Jun 22 14:26:52.410: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca container projected-configmap-volume-test: 
STEP: delete the pod
Jun 22 14:26:52.443: INFO: Waiting for pod pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca to disappear
Jun 22 14:26:52.465: INFO: Pod pod-projected-configmaps-bce397aa-3753-4475-9e38-95794806c9ca no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:26:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9830" for this suite.
Jun 22 14:26:58.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:26:58.569: INFO: namespace projected-9830 deletion completed in 6.10102182s
• [SLOW TEST:10.294 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:26:58.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:27:02.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5041" for this suite.
Jun 22 14:27:40.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:27:40.821: INFO: namespace kubelet-test-5041 deletion completed in 38.128516043s
• [SLOW TEST:42.252 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:27:40.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e2a1725d-93b9-4bc4-b3f4-71c7be3279b6
STEP: Creating a pod to test consume secrets
Jun 22 14:27:40.912: INFO: Waiting up to 5m0s for pod "pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f" in namespace "secrets-6470" to be "success or failure"
Jun 22 14:27:40.918: INFO: Pod "pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300927ms
Jun 22 14:27:42.923: INFO: Pod "pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010718465s
Jun 22 14:27:44.927: INFO: Pod "pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015223026s
STEP: Saw pod success
Jun 22 14:27:44.927: INFO: Pod "pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f" satisfied condition "success or failure"
Jun 22 14:27:44.931: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f container secret-volume-test: 
STEP: delete the pod
Jun 22 14:27:44.964: INFO: Waiting for pod pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f to disappear
Jun 22 14:27:44.993: INFO: Pod pod-secrets-7942039b-4d06-42b1-9026-851b43d5544f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:27:44.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6470" for this suite.
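The volume-consumption tests above all share one wait pattern: poll the pod's phase every couple of seconds until it is terminal, logging Pending/Succeeded and the elapsed time, for at most 5m0s. A sketch of that loop; `get_phase` is a hypothetical stand-in for reading `pod.status.phase` through the API, not a real client call:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it is terminal ("Succeeded" or "Failed").

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop in the e2e framework; get_phase is a caller-supplied callable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

In the log the condition is called "success or failure" because either terminal phase ends the wait; the test then asserts the phase was Succeeded and fetches the container's logs.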
Jun 22 14:27:51.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:27:51.082: INFO: namespace secrets-6470 deletion completed in 6.085678852s
• [SLOW TEST:10.261 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:27:51.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-67db1d70-7e84-45fa-a45b-3a49498289ab
STEP: Creating a pod to test consume secrets
Jun 22 14:27:51.221: INFO: Waiting up to 5m0s for pod "pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd" in namespace "secrets-5171" to be "success or failure"
Jun 22 14:27:51.230: INFO: Pod "pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.380486ms
Jun 22 14:27:53.233: INFO: Pod "pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012632155s
Jun 22 14:27:55.238: INFO: Pod "pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017165419s
STEP: Saw pod success
Jun 22 14:27:55.238: INFO: Pod "pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd" satisfied condition "success or failure"
Jun 22 14:27:55.240: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd container secret-volume-test: 
STEP: delete the pod
Jun 22 14:27:55.263: INFO: Waiting for pod pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd to disappear
Jun 22 14:27:55.278: INFO: Pod pod-secrets-4d885c4a-7bda-49a2-950f-8d5da6c70dbd no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:27:55.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5171" for this suite.
Jun 22 14:28:01.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:28:01.381: INFO: namespace secrets-5171 deletion completed in 6.098673284s
• [SLOW TEST:10.298 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:28:01.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jun 22 14:28:01.432: INFO: namespace kubectl-3297
Jun 22 14:28:01.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3297'
Jun 22 14:28:01.704: INFO: stderr: ""
Jun 22 14:28:01.704: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 22 14:28:02.708: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 14:28:02.708: INFO: Found 0 / 1
Jun 22 14:28:03.708: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 14:28:03.708: INFO: Found 0 / 1
Jun 22 14:28:04.709: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 14:28:04.709: INFO: Found 0 / 1
Jun 22 14:28:05.709: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 14:28:05.709: INFO: Found 1 / 1
Jun 22 14:28:05.709: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 22 14:28:05.713: INFO: Selector matched 1 pods for map[app:redis]
Jun 22 14:28:05.713: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 22 14:28:05.713: INFO: wait on redis-master startup in kubectl-3297
Jun 22 14:28:05.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24bbf redis-master --namespace=kubectl-3297'
Jun 22 14:28:05.827: INFO: stderr: ""
Jun 22 14:28:05.827: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Jun 14:28:04.532 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jun 14:28:04.532 # Server started, Redis version 3.2.12\n1:M 22 Jun 14:28:04.532 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jun 14:28:04.532 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jun 22 14:28:05.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3297'
Jun 22 14:28:05.991: INFO: stderr: ""
Jun 22 14:28:05.991: INFO: stdout: "service/rm2 exposed\n"
Jun 22 14:28:06.006: INFO: Service rm2 in namespace kubectl-3297 found.
STEP: exposing service
Jun 22 14:28:08.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3297'
Jun 22 14:28:08.153: INFO: stderr: ""
Jun 22 14:28:08.153: INFO: stdout: "service/rm3 exposed\n"
Jun 22 14:28:08.161: INFO: Service rm3 in namespace kubectl-3297 found.
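The expose steps above chain two services onto one container port: rm2 maps its own port 1234 to the RC pods' port 6379, and rm3, exposed from rm2, maps 2345 onto the same target port. A tiny model of that port chaining; `expose` here is an illustrative helper, not kubectl:

```python
def expose(port, target_port):
    """Model of `kubectl expose`: a Service maps its own port to a targetPort."""
    return {"port": port, "targetPort": target_port}

# rc redis-master's pods listen on container port 6379.
# kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
rm2 = expose(1234, 6379)
# kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
rm3 = expose(2345, rm2["targetPort"])
```

Whichever service a client dials (rm2:1234 or rm3:2345), traffic lands on the same backend port, which is what the test's "service/rm2 exposed" / "service/rm3 exposed" steps set up.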
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:28:10.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3297" for this suite.
Jun 22 14:28:38.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:28:38.306: INFO: namespace kubectl-3297 deletion completed in 28.137512683s
• [SLOW TEST:36.925 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:28:38.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 22 14:28:42.456: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:28:42.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5678" for this suite.
Jun 22 14:28:48.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:28:48.616: INFO: namespace container-runtime-5678 deletion completed in 6.090318991s
• [SLOW TEST:10.308 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:28:48.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-731eace8-5eed-48e6-be96-7fd14eb4e18a
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-731eace8-5eed-48e6-be96-7fd14eb4e18a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:28:54.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2672" for this suite.
Jun 22 14:29:16.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:29:16.866: INFO: namespace projected-2672 deletion completed in 22.112060361s
• [SLOW TEST:28.250 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:29:16.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:29:16.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3693" for this suite.
Jun 22 14:29:22.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:29:23.048: INFO: namespace services-3693 deletion completed in 6.089505554s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.182 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:29:23.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if
TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 22 14:29:27.215: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:29:27.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9801" for this suite. Jun 22 14:29:33.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:29:33.572: INFO: namespace container-runtime-9801 deletion completed in 6.178843343s • [SLOW TEST:10.523 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:29:33.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f9d4f5c4-02f3-42c5-a36e-c2db5df61569 STEP: Creating a pod to test consume secrets Jun 22 14:29:33.644: INFO: Waiting up to 5m0s for pod "pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7" in namespace "secrets-1958" to be "success or failure" Jun 22 14:29:33.658: INFO: Pod "pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.582735ms Jun 22 14:29:35.662: INFO: Pod "pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017971127s Jun 22 14:29:37.666: INFO: Pod "pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021762471s STEP: Saw pod success Jun 22 14:29:37.666: INFO: Pod "pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7" satisfied condition "success or failure" Jun 22 14:29:37.668: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7 container secret-env-test: STEP: delete the pod Jun 22 14:29:37.685: INFO: Waiting for pod pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7 to disappear Jun 22 14:29:37.690: INFO: Pod pod-secrets-af7015c2-a917-4d33-a187-092e52217fd7 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:29:37.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1958" for this suite. Jun 22 14:29:43.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:29:43.786: INFO: namespace secrets-1958 deletion completed in 6.092991732s • [SLOW TEST:10.214 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 22 14:29:43.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod 
should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 22 14:29:49.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-42e69d0d-122e-440a-a8cf-1004f4a6324b -c busybox-main-container --namespace=emptydir-5571 -- cat /usr/share/volumeshare/shareddata.txt' Jun 22 14:29:50.128: INFO: stderr: "I0622 14:29:50.019897 3153 log.go:172] (0xc00012a6e0) (0xc0005e0be0) Create stream\nI0622 14:29:50.019967 3153 log.go:172] (0xc00012a6e0) (0xc0005e0be0) Stream added, broadcasting: 1\nI0622 14:29:50.022189 3153 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0622 14:29:50.022239 3153 log.go:172] (0xc00012a6e0) (0xc00098e000) Create stream\nI0622 14:29:50.022256 3153 log.go:172] (0xc00012a6e0) (0xc00098e000) Stream added, broadcasting: 3\nI0622 14:29:50.023458 3153 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0622 14:29:50.023509 3153 log.go:172] (0xc00012a6e0) (0xc00083a000) Create stream\nI0622 14:29:50.023533 3153 log.go:172] (0xc00012a6e0) (0xc00083a000) Stream added, broadcasting: 5\nI0622 14:29:50.024455 3153 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0622 14:29:50.122251 3153 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0622 14:29:50.122288 3153 log.go:172] (0xc00083a000) (5) Data frame handling\nI0622 14:29:50.122311 3153 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0622 14:29:50.122319 3153 log.go:172] (0xc00098e000) (3) Data frame handling\nI0622 14:29:50.122326 3153 log.go:172] (0xc00098e000) (3) Data frame sent\nI0622 14:29:50.122332 3153 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0622 14:29:50.122337 3153 log.go:172] (0xc00098e000) (3) Data frame handling\nI0622 14:29:50.123540 3153 log.go:172] (0xc00012a6e0) Data frame received 
for 1\nI0622 14:29:50.123561 3153 log.go:172] (0xc0005e0be0) (1) Data frame handling\nI0622 14:29:50.123572 3153 log.go:172] (0xc0005e0be0) (1) Data frame sent\nI0622 14:29:50.123582 3153 log.go:172] (0xc00012a6e0) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0622 14:29:50.123597 3153 log.go:172] (0xc00012a6e0) Go away received\nI0622 14:29:50.123969 3153 log.go:172] (0xc00012a6e0) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0622 14:29:50.123991 3153 log.go:172] (0xc00012a6e0) (0xc00098e000) Stream removed, broadcasting: 3\nI0622 14:29:50.124001 3153 log.go:172] (0xc00012a6e0) (0xc00083a000) Stream removed, broadcasting: 5\n" Jun 22 14:29:50.128: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 22 14:29:50.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5571" for this suite. Jun 22 14:29:56.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 22 14:29:56.258: INFO: namespace emptydir-5571 deletion completed in 6.126278662s • [SLOW TEST:12.471 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
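The emptyDir test above ran `kubectl exec` against one of two containers that mount the same volume, and read back a file the other container wrote. As a minimal sketch of the pod shape being exercised (a Python dict; the names, images, and command are illustrative assumptions, not the manifest the e2e framework actually generates):

```python
# Sketch of a two-container pod sharing an emptyDir volume, similar in shape
# to the pod-sharedvolume-* pod in the log. Names/images are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-sharedvolume-example"},
    "spec": {
        # One emptyDir volume, mounted by both containers at the same path.
        "volumes": [{"name": "shared-data", "emptyDir": {}}],
        "containers": [
            {
                "name": "nginx-container",
                "image": "nginx",
                "volumeMounts": [
                    {"name": "shared-data", "mountPath": "/usr/share/volumeshare"}
                ],
            },
            {
                "name": "busybox-main-container",
                "image": "busybox",
                # Writes the file that the test later reads via `kubectl exec ... cat`.
                "command": [
                    "/bin/sh", "-c",
                    "echo 'Hello from the busy-box sub-container' "
                    "> /usr/share/volumeshare/shareddata.txt && sleep 3600",
                ],
                "volumeMounts": [
                    {"name": "shared-data", "mountPath": "/usr/share/volumeshare"}
                ],
            },
        ],
    },
}

# Both containers reference the same named volume, which is what makes a file
# written by one container visible to the other.
mounts = {m["name"] for c in pod["spec"]["containers"] for m in c["volumeMounts"]}
assert mounts == {"shared-data"}
```

Because `emptyDir` is allocated per pod (not per container), the write from `busybox-main-container` is immediately visible at the same path in `nginx-container`.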
STEP: Creating a kubernetes client
Jun 22 14:29:56.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 22 14:30:04.429: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 22 14:30:04.433: INFO: Pod pod-with-poststart-http-hook still exists
Jun 22 14:30:06.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 22 14:30:06.436: INFO: Pod pod-with-poststart-http-hook still exists
Jun 22 14:30:08.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 22 14:30:08.438: INFO: Pod pod-with-poststart-http-hook still exists
Jun 22 14:30:10.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 22 14:30:10.438: INFO: Pod pod-with-poststart-http-hook still exists
Jun 22 14:30:12.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 22 14:30:12.446: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:30:12.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6761" for this suite.
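The repeated "Waiting for pod ... to disappear" / "still exists" pairs above are a delete-and-poll loop: after deleting the pod with the lifecycle hook, the framework re-checks roughly every two seconds until the pod object is gone. A minimal sketch of that polling pattern (plain Python; `get_pod` is a hypothetical stand-in for the API call, not a real client-go or framework function):

```python
import time

def wait_for_pod_to_disappear(get_pod, timeout=30.0, interval=2.0):
    """Poll get_pod() until it returns None (pod deleted) or the timeout
    expires, mirroring the 'Waiting for pod ... to disappear' loop in the log."""
    deadline = time.monotonic() + timeout
    while True:
        if get_pod() is None:
            return True           # "Pod ... no longer exists"
        if time.monotonic() >= deadline:
            return False          # gave up; caller reports a timeout
        time.sleep(interval)      # "Pod ... still exists" -> wait and retry

# Simulated pod that disappears after two checks (stand-in for the API):
state = {"checks": 0}
def fake_get_pod():
    state["checks"] += 1
    return None if state["checks"] > 2 else {"name": "pod-with-poststart-http-hook"}

assert wait_for_pod_to_disappear(fake_get_pod, timeout=1.0, interval=0.01)
```

With the log's ~2 s interval, the five check pairs before "no longer exists" account for the roughly eight seconds of teardown seen above.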
Jun 22 14:30:34.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:30:34.566: INFO: namespace container-lifecycle-hook-6761 deletion completed in 22.115911192s

• [SLOW TEST:38.308 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:30:34.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 22 14:30:34.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871714,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 22 14:30:34.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871714,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 22 14:30:44.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871734,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 22 14:30:44.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871734,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 22 14:30:54.684: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871754,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 22 14:30:54.684: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871754,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 22 14:31:04.692: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871775,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 22 14:31:04.692: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-a,UID:590a1719-6b5b-4d12-ad68-fd34bfbfb0b3,ResourceVersion:17871775,Generation:0,CreationTimestamp:2020-06-22 14:30:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 22 14:31:14.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-b,UID:cb212ed5-af0d-4a0c-9179-d145f8038d67,ResourceVersion:17871797,Generation:0,CreationTimestamp:2020-06-22 14:31:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 22 14:31:14.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-b,UID:cb212ed5-af0d-4a0c-9179-d145f8038d67,ResourceVersion:17871797,Generation:0,CreationTimestamp:2020-06-22 14:31:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 22 14:31:24.707: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-b,UID:cb212ed5-af0d-4a0c-9179-d145f8038d67,ResourceVersion:17871818,Generation:0,CreationTimestamp:2020-06-22 14:31:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 22 14:31:24.707: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3911,SelfLink:/api/v1/namespaces/watch-3911/configmaps/e2e-watch-test-configmap-b,UID:cb212ed5-af0d-4a0c-9179-d145f8038d67,ResourceVersion:17871818,Generation:0,CreationTimestamp:2020-06-22 14:31:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:31:34.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3911" for this suite.
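Each event in the watch test above is logged twice because two of the three watchers match it: the watcher on label `watch-this-configmap=multiple-watchers-A` (or `-B`) and the "A or B" watcher. A minimal sketch of that label-selector fan-out, with no client library (the helper names are illustrative):

```python
# Toy model of the three watchers the test registers and which ones would
# observe an event for a configmap with the given labels.
def matches(labels, wanted_values):
    return labels.get("watch-this-configmap") in wanted_values

watchers = {
    "A":      {"multiple-watchers-A"},
    "B":      {"multiple-watchers-B"},
    "A-or-B": {"multiple-watchers-A", "multiple-watchers-B"},
}

def deliver(event_type, labels):
    """Return the (sorted) watcher names that would observe this event."""
    return sorted(name for name, vals in watchers.items() if matches(labels, vals))

# Events for configmap A reach watcher A and the A-or-B watcher, never B,
# which is why each "Got : ..." line above appears exactly twice:
assert deliver("ADDED", {"watch-this-configmap": "multiple-watchers-A"}) == ["A", "A-or-B"]
assert deliver("DELETED", {"watch-this-configmap": "multiple-watchers-B"}) == ["A-or-B", "B"]
```

The `ResourceVersion` values in the dumps (17871714, 17871734, 17871754, ...) increase with every mutation, which is how a real watcher orders and resumes event streams.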
Jun 22 14:31:40.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:31:40.809: INFO: namespace watch-3911 deletion completed in 6.096041198s

• [SLOW TEST:66.242 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:31:40.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:31:40.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2142" for this suite.
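The "Pods Set QOS Class" test above verifies that the apiserver populates `status.qosClass` on submission. A simplified sketch of the well-known classification rules (BestEffort / Burstable / Guaranteed), ignoring some per-resource detail covered in the Kubernetes docs:

```python
# Simplified QoS classification from container resource specs.
# Each container is a dict with optional "requests" and "limits" maps.
def qos_class(containers):
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    # BestEffort: no container sets any requests or limits.
    if all(not r and not l for r, l in zip(requests, limits)):
        return "BestEffort"
    # Guaranteed: every container sets cpu and memory limits, and any
    # explicit requests equal the limits (requests default to limits).
    guaranteed = all(
        l.get("cpu") and l.get("memory")
        and all(r.get(k, l[k]) == l[k] for k in l)
        for r, l in zip(requests, limits)
    )
    return "Guaranteed" if guaranteed else "Burstable"

assert qos_class([{}]) == "BestEffort"
assert qos_class([{"requests": {"cpu": "100m"}}]) == "Burstable"
assert qos_class([{"limits": {"cpu": "100m", "memory": "100Mi"}}]) == "Guaranteed"
```

The test only asserts that some class was set; the class chosen drives eviction ordering under node resource pressure.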
Jun 22 14:32:02.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:32:03.057: INFO: namespace pods-2142 deletion completed in 22.113044021s

• [SLOW TEST:22.248 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:32:03.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-95f8213c-88b7-4d5a-b93a-3fad6c132473
STEP: Creating a pod to test consume secrets
Jun 22 14:32:03.244: INFO: Waiting up to 5m0s for pod "pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f" in namespace "secrets-1861" to be "success or failure"
Jun 22 14:32:03.252: INFO: Pod "pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454247ms
Jun 22 14:32:05.387: INFO: Pod "pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142923388s
Jun 22 14:32:07.390: INFO: Pod "pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146499051s
STEP: Saw pod success
Jun 22 14:32:07.390: INFO: Pod "pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f" satisfied condition "success or failure"
Jun 22 14:32:07.393: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f container secret-volume-test:
STEP: delete the pod
Jun 22 14:32:07.472: INFO: Waiting for pod pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f to disappear
Jun 22 14:32:07.496: INFO: Pod pod-secrets-1c335e68-a6c1-44be-89fa-c89ef21bd03f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:32:07.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1861" for this suite.
Jun 22 14:32:13.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:32:13.592: INFO: namespace secrets-1861 deletion completed in 6.092713279s
STEP: Destroying namespace "secret-namespace-6990" for this suite.
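The secrets test above creates a second secret with the same name in another namespace (`secret-namespace-6990`) and checks the pod still mounts the one from its own namespace (`secrets-1861`). The underlying rule is that secret references in a pod spec resolve within the pod's namespace, i.e. objects are keyed by (namespace, name). A toy lookup illustrating that; the namespaces are from the log, but the secret name and data values are simplified placeholders:

```python
# Namespaced object store: two secrets share a name but live in different
# namespaces, so they are distinct objects.
store = {
    ("secrets-1861", "secret-test"): {"data": "from-pod-namespace"},
    ("secret-namespace-6990", "secret-test"): {"data": "from-other-namespace"},
}

def resolve_secret(pod_namespace, secret_name):
    """A pod's volume source names only the secret; the namespace is the pod's."""
    return store[(pod_namespace, secret_name)]

# The pod in secrets-1861 gets its own namespace's secret, regardless of the
# same-named secret elsewhere:
assert resolve_secret("secrets-1861", "secret-test")["data"] == "from-pod-namespace"
```

This is why the test ends by destroying two namespaces: one held the pod and its secret, the other held only the decoy secret.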
Jun 22 14:32:19.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:32:19.741: INFO: namespace secret-namespace-6990 deletion completed in 6.148627325s

• [SLOW TEST:16.683 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:32:19.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
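The namespaces test verifies cascading deletion: removing a namespace also removes the services created inside it, and a recreated namespace with the same name starts empty. A toy model of that behavior (hypothetical helpers, not client-go calls):

```python
# Namespaces own their objects; deleting the namespace deletes everything in it.
namespaces = {"nsdeletetest": {"services": {"test-service"}}}

def delete_namespace(name):
    # Removing the namespace entry drops all contained objects with it.
    namespaces.pop(name, None)

def create_namespace(name):
    # A recreated namespace shares only the name; it starts with no objects.
    namespaces[name] = {"services": set()}

delete_namespace("nsdeletetest")
create_namespace("nsdeletetest")
assert namespaces["nsdeletetest"]["services"] == set()
```

In the real cluster the deletion is asynchronous, which is why the log waits for the namespace to be removed before recreating it and verifying the service is gone.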
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:32:25.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6373" for this suite.
Jun 22 14:32:32.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:32:32.083: INFO: namespace namespaces-6373 deletion completed in 6.08944513s
STEP: Destroying namespace "nsdeletetest-2185" for this suite.
Jun 22 14:32:32.086: INFO: Namespace nsdeletetest-2185 was already deleted
STEP: Destroying namespace "nsdeletetest-2740" for this suite.
Jun 22 14:32:38.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:32:38.184: INFO: namespace nsdeletetest-2740 deletion completed in 6.098705519s
• [SLOW TEST:18.443 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:32:38.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-18fbcb93-aec6-4978-9d77-fec2c6643117
STEP: Creating secret with name s-test-opt-upd-eda5217e-740c-483d-8b7c-89d2206b3c79
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-18fbcb93-aec6-4978-9d77-fec2c6643117
STEP: Updating secret s-test-opt-upd-eda5217e-740c-483d-8b7c-89d2206b3c79
STEP: Creating secret with name s-test-opt-create-5b7092d3-27ef-4c07-9e4d-b3696c8735a8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:34:08.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8883" for this suite.
Jun 22 14:34:32.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:34:32.875: INFO: namespace secrets-8883 deletion completed in 24.091994729s
• [SLOW TEST:114.690 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:34:32.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bc402525-5435-40cd-b4f3-3015f2c51802
STEP: Creating a pod to test consume configMaps
Jun 22 14:34:32.988: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe" in namespace "projected-8021" to be "success or failure"
Jun 22 14:34:32.996: INFO: Pod "pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.97092ms
Jun 22 14:34:35.000: INFO: Pod "pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012274888s
Jun 22 14:34:37.005: INFO: Pod "pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016627434s
STEP: Saw pod success
Jun 22 14:34:37.005: INFO: Pod "pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe" satisfied condition "success or failure"
Jun 22 14:34:37.008: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe container projected-configmap-volume-test:
STEP: delete the pod
Jun 22 14:34:37.041: INFO: Waiting for pod pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe to disappear
Jun 22 14:34:37.057: INFO: Pod pod-projected-configmaps-c348247a-e38a-4387-93f7-8c18f85702fe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:34:37.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8021" for this suite.
Jun 22 14:34:43.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:34:43.210: INFO: namespace projected-8021 deletion completed in 6.150349102s
• [SLOW TEST:10.335 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:34:43.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b1a25d11-c576-4d82-8f31-302dbdb109c4
STEP: Creating a pod to test consume configMaps
Jun 22 14:34:43.311: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684" in namespace "projected-2327" to be "success or failure"
Jun 22 14:34:43.322: INFO: Pod "pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684": Phase="Pending", Reason="", readiness=false. Elapsed: 10.865525ms
Jun 22 14:34:45.326: INFO: Pod "pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015184227s
Jun 22 14:34:47.419: INFO: Pod "pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10779669s
STEP: Saw pod success
Jun 22 14:34:47.419: INFO: Pod "pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684" satisfied condition "success or failure"
Jun 22 14:34:47.422: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684 container projected-configmap-volume-test:
STEP: delete the pod
Jun 22 14:34:47.490: INFO: Waiting for pod pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684 to disappear
Jun 22 14:34:47.605: INFO: Pod pod-projected-configmaps-8a0fbd5c-cf3a-4948-a71c-02cc8f330684 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:34:47.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2327" for this suite.
Jun 22 14:34:53.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:34:53.732: INFO: namespace projected-2327 deletion completed in 6.124619176s
• [SLOW TEST:10.522 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:34:53.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:34:57.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9438" for this suite.
Jun 22 14:35:47.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:35:47.954: INFO: namespace kubelet-test-9438 deletion completed in 50.114025227s
• [SLOW TEST:54.222 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:35:47.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7863fdcd-02d0-4172-9e97-48bab8ef0b30
STEP: Creating a pod to test consume configMaps
Jun 22 14:35:48.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4" in namespace "configmap-8202" to be "success or failure"
Jun 22 14:35:48.149: INFO: Pod "pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 54.341353ms
Jun 22 14:35:50.153: INFO: Pod "pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058513684s
Jun 22 14:35:52.157: INFO: Pod "pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062012518s
STEP: Saw pod success
Jun 22 14:35:52.157: INFO: Pod "pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4" satisfied condition "success or failure"
Jun 22 14:35:52.160: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4 container configmap-volume-test:
STEP: delete the pod
Jun 22 14:35:52.180: INFO: Waiting for pod pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4 to disappear
Jun 22 14:35:52.184: INFO: Pod pod-configmaps-4788492a-19d4-4194-b4e6-b743cb7ad0e4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:35:52.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8202" for this suite.
Jun 22 14:35:58.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:35:58.290: INFO: namespace configmap-8202 deletion completed in 6.103283391s
• [SLOW TEST:10.335 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:35:58.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 22 14:35:58.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680" in namespace "downward-api-7168" to be "success or failure"
Jun 22 14:35:58.428: INFO: Pod "downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680": Phase="Pending", Reason="", readiness=false. Elapsed: 5.47998ms
Jun 22 14:36:00.432: INFO: Pod "downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009014585s
Jun 22 14:36:02.436: INFO: Pod "downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012763766s
STEP: Saw pod success
Jun 22 14:36:02.436: INFO: Pod "downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680" satisfied condition "success or failure"
Jun 22 14:36:02.438: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680 container client-container:
STEP: delete the pod
Jun 22 14:36:02.480: INFO: Waiting for pod downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680 to disappear
Jun 22 14:36:02.506: INFO: Pod downwardapi-volume-581455b9-5764-4be3-aa26-9630cac8d680 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:36:02.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7168" for this suite.
Jun 22 14:36:08.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:36:08.602: INFO: namespace downward-api-7168 deletion completed in 6.092969551s
• [SLOW TEST:10.312 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:36:08.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jun 22 14:36:08.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jun 22 14:36:08.883: INFO: stderr: ""
Jun 22 14:36:08.883: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:36:08.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9934" for this suite.
Jun 22 14:36:14.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:36:15.022: INFO: namespace kubectl-9934 deletion completed in 6.134107497s
• [SLOW TEST:6.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 22 14:36:15.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 22 14:36:19.085: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3fbad1b6-c69e-4588-8a1d-c47821169eaa,GenerateName:,Namespace:events-7669,SelfLink:/api/v1/namespaces/events-7669/pods/send-events-3fbad1b6-c69e-4588-8a1d-c47821169eaa,UID:1354c324-8486-4220-9940-94aee8d36c01,ResourceVersion:17872672,Generation:0,CreationTimestamp:2020-06-22 14:36:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 55005636,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vvpz5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vvpz5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vvpz5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257caa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257cac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:36:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:36:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:36:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-22 14:36:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.31,StartTime:2020-06-22 14:36:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-22 14:36:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://dba64aa64a52a94ad1a5d5772c38e24212dbcee202c92de52da2a8fe84f1e45d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jun 22 14:36:21.090: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 22 14:36:23.095: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 22 14:36:23.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7669" for this suite.
Jun 22 14:37:03.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 22 14:37:03.298: INFO: namespace events-7669 deletion completed in 40.133853882s
• [SLOW TEST:48.275 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 22 14:37:03.299: INFO: Running AfterSuite actions on all nodes
Jun 22 14:37:03.299: INFO: Running AfterSuite actions on node 1
Jun 22 14:37:03.299: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6068.329 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS