I0719 23:28:19.627158 6 e2e.go:243] Starting e2e run "d5cd44e8-f7c1-452b-8ff7-a341910ef756" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595201298 - Will randomize all specs
Will run 215 of 4413 specs

Jul 19 23:28:19.825: INFO: >>> kubeConfig: /root/.kube/config
Jul 19 23:28:19.829: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 19 23:28:19.853: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 19 23:28:19.881: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 19 23:28:19.881: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 19 23:28:19.881: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 19 23:28:19.887: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 19 23:28:19.887: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 19 23:28:19.887: INFO: e2e test version: v1.15.12
Jul 19 23:28:19.888: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:28:19.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jul 19 23:28:19.996: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f4fe1f0d-8606-4c4c-9e2b-ba5f31e0d723
STEP: Creating a pod to test consume secrets
Jul 19 23:28:20.028: INFO: Waiting up to 5m0s for pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252" in namespace "secrets-945" to be "success or failure"
Jul 19 23:28:20.084: INFO: Pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252": Phase="Pending", Reason="", readiness=false. Elapsed: 56.185494ms
Jul 19 23:28:22.088: INFO: Pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060097963s
Jul 19 23:28:24.092: INFO: Pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064320898s
Jul 19 23:28:26.097: INFO: Pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068795211s
STEP: Saw pod success
Jul 19 23:28:26.097: INFO: Pod "pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252" satisfied condition "success or failure"
Jul 19 23:28:26.100: INFO: Trying to get logs from node iruya-worker pod pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252 container secret-volume-test:
STEP: delete the pod
Jul 19 23:28:26.176: INFO: Waiting for pod pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252 to disappear
Jul 19 23:28:26.190: INFO: Pod pod-secrets-47fe3221-2a51-495e-a3fc-c3ca92398252 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:28:26.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-945" for this suite.
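The spec above mounts a Secret into the pod with a key-to-path mapping. A rough sketch of the two objects involved follows (all names, data, and the image are hypothetical placeholders, not the suite's generated values):

```python
# Hedged sketch of the Secret + Pod pair this spec exercises.
# Names, data, and image are placeholders.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-map"},  # suite appends a UUID suffix
    "data": {"data-1": "dmFsdWUtMQ=="},       # base64 of "value-1"
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "secret-volume-test",
            "image": "example/mounttest",  # placeholder image
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
        "volumes": [{
            "name": "secret-volume",
            "secret": {
                "secretName": secret["metadata"]["name"],
                # the "mappings" under test: project the key to a new path
                "items": [{"key": "data-1", "path": "new-path-data-1"}],
            },
        }],
    },
}
```

The container then reads the file at /etc/secret-volume/new-path-data-1, and the spec passes when the pod terminates with success.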
Jul 19 23:28:32.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:28:32.290: INFO: namespace secrets-945 deletion completed in 6.097235739s

• [SLOW TEST:12.401 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:28:32.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b8a24498-aac4-4d75-ab09-d0727f45e310
STEP: Creating configMap with name cm-test-opt-upd-c640d38c-25f8-4d26-91b3-6dfca9548c1d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b8a24498-aac4-4d75-ab09-d0727f45e310
STEP: Updating configmap cm-test-opt-upd-c640d38c-25f8-4d26-91b3-6dfca9548c1d
STEP: Creating configMap with name cm-test-opt-create-16049729-0d49-4ae9-bc17-c3473b5c46ec
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:28:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9802" for this suite.
Jul 19 23:29:06.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:29:06.632: INFO: namespace projected-9802 deletion completed in 24.105883471s

• [SLOW TEST:34.342 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources
  Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:29:06.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 19 23:29:06.888: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:29:08.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6630" for this suite.
Jul 19 23:29:14.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:29:14.433: INFO: namespace custom-resource-definition-6630 deletion completed in 6.131232511s

• [SLOW TEST:7.801 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:29:14.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 19 23:29:14.507: INFO: Creating ReplicaSet my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0
Jul 19 23:29:14.526: INFO: Pod name my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0: Found 0 pods out of 1
Jul 19 23:29:19.540: INFO: Pod name my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0: Found 1 pods out of 1
Jul 19 23:29:19.540: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0" is running
Jul 19 23:29:19.543: INFO: Pod "my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0-qdrqj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 23:29:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 23:29:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 23:29:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-19 23:29:14 +0000 UTC Reason: Message:}])
Jul 19 23:29:19.543: INFO: Trying to dial the pod
Jul 19 23:29:24.727: INFO: Controller my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0: Got expected result from replica 1 [my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0-qdrqj]: "my-hostname-basic-ba808472-bf39-486c-b9e1-d06165e02df0-qdrqj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:29:24.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-117" for this suite.
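The ReplicaSet spec above creates one replica of a hostname-serving pod and dials it to confirm it answers with its own pod name. A minimal sketch of the object shape (name, image, and port are placeholders; the real suite generates a UUID-suffixed name):

```python
# Hedged sketch of the ReplicaSet this spec creates.
name = "my-hostname-basic"  # placeholder; suite appends a UUID

replicaset = {
    "apiVersion": "apps/v1",
    "kind": "ReplicaSet",
    "metadata": {"name": name},
    "spec": {
        "replicas": 1,
        # selector must match the pod template's labels
        "selector": {"matchLabels": {"name": name}},
        "template": {
            "metadata": {"labels": {"name": name}},
            "spec": {
                "containers": [{
                    "name": name,
                    "image": "example/serve-hostname",  # placeholder image
                    "ports": [{"containerPort": 9376}],
                }],
            },
        },
    },
}
```

The dial step then asserts that each replica's HTTP response equals the replica's pod name, which is what "Got expected result from replica 1" records.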
Jul 19 23:29:30.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:29:30.866: INFO: namespace replicaset-117 deletion completed in 6.135200666s

• [SLOW TEST:16.432 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:29:30.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 19 23:29:35.505: INFO: Successfully updated pod "pod-update-89eec80b-27cc-47c3-8493-d4e78c30c6b0"
STEP: verifying the updated pod is in kubernetes
Jul 19 23:29:35.528: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:29:35.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6759" for this suite.
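The "should be updated" spec above round-trips a mutation through the API server. In-place pod updates are limited to a few mutable fields (labels and annotations among them); a strategic-merge patch for a label change is a representative sketch (hedged: the log does not show which field the suite actually updates, and the label key here is hypothetical):

```python
import json

# Representative body for an in-place pod update via a label change.
# Sent as the body of a PATCH request with
# Content-Type: application/strategic-merge-patch+json.
patch = {"metadata": {"labels": {"time": "value"}}}  # hypothetical label
body = json.dumps(patch)
```

The verification step then re-reads the pod and checks that the mutated field is visible, which the log records as "Pod update OK".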
Jul 19 23:29:57.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:29:57.636: INFO: namespace pods-6759 deletion completed in 22.103634321s

• [SLOW TEST:26.769 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:29:57.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-d23a88ea-9827-4531-9194-776f7eb85f18
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:29:57.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4744" for this suite.
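The empty-key spec above expects the API server to reject the Secret at validation time, so no pod is ever created. The server-side rule is approximately the one below (an illustration only; the canonical check lives in the apiserver's validation code):

```python
import re

# Approximation of Kubernetes' data-key validation for Secrets/ConfigMaps:
# keys must be non-empty, at most 253 characters, and consist of
# alphanumerics, '-', '_' or '.'.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_secret_key(key: str) -> bool:
    return len(key) <= 253 and bool(_KEY_RE.match(key))
```

An empty string fails this rule, which is why the create call in the spec fails as intended.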
Jul 19 23:30:03.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:30:03.957: INFO: namespace secrets-4744 deletion completed in 6.249835397s

• [SLOW TEST:6.321 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:30:03.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 19 23:30:04.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3832'
Jul 19 23:30:06.925: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 19 23:30:06.925: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jul 19 23:30:09.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3832'
Jul 19 23:30:09.139: INFO: stderr: ""
Jul 19 23:30:09.139: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:30:09.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3832" for this suite.
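The stderr warning above appears because `kubectl run --generator=deployment/apps.v1` was deprecated in favor of `kubectl create deployment`. Roughly, that generator expanded the command into a Deployment of the following shape (a hedged sketch; the `run:` label key and single-replica default are what the old generator used, to the best of this author's knowledge):

```python
# Approximate expansion of:
#   kubectl run e2e-test-nginx-deployment \
#     --image=docker.io/library/nginx:1.14-alpine \
#     --generator=deployment/apps.v1
name = "e2e-test-nginx-deployment"

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": name, "labels": {"run": name}},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"run": name}},
        "template": {
            "metadata": {"labels": {"run": name}},
            "spec": {
                "containers": [{
                    "name": name,
                    "image": "docker.io/library/nginx:1.14-alpine",
                }],
            },
        },
    },
}
```

The spec then verifies both the Deployment object and the pod it controls before tearing everything down.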
Jul 19 23:30:31.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:30:31.247: INFO: namespace kubectl-3832 deletion completed in 22.105742335s

• [SLOW TEST:27.290 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:30:31.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 19 23:30:31.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:30:35.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6491" for this suite.
Jul 19 23:31:17.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:31:17.886: INFO: namespace pods-6491 deletion completed in 42.431165116s

• [SLOW TEST:46.637 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:31:17.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:32:18.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6805" for this suite.
Jul 19 23:32:40.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:32:40.194: INFO: namespace container-probe-6805 deletion completed in 22.125483409s

• [SLOW TEST:82.308 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:32:40.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 19 23:32:44.326: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:32:44.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3679" for this suite.
Jul 19 23:32:50.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:32:50.467: INFO: namespace container-runtime-3679 deletion completed in 6.083012243s

• [SLOW TEST:10.273 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:32:50.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 19 23:33:18.576: INFO: Container started at 2020-07-19 23:32:53 +0000 UTC, pod became ready at 2020-07-19 23:33:16 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:33:18.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4651" for this suite.
Jul 19 23:33:40.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:33:40.805: INFO: namespace container-probe-4651 deletion completed in 22.226110405s

• [SLOW TEST:50.337 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:33:40.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 19 23:33:41.881: INFO: Waiting up to 5m0s for pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418" in namespace "emptydir-138" to be "success or failure"
Jul 19 23:33:41.934: INFO: Pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418": Phase="Pending", Reason="", readiness=false. Elapsed: 52.90042ms
Jul 19 23:33:43.938: INFO: Pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05699311s
Jul 19 23:33:45.943: INFO: Pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061226955s
Jul 19 23:33:47.947: INFO: Pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06524297s
STEP: Saw pod success
Jul 19 23:33:47.947: INFO: Pod "pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418" satisfied condition "success or failure"
Jul 19 23:33:47.949: INFO: Trying to get logs from node iruya-worker pod pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418 container test-container:
STEP: delete the pod
Jul 19 23:33:48.019: INFO: Waiting for pod pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418 to disappear
Jul 19 23:33:48.041: INFO: Pod pod-1233a7e7-ea3f-4f79-bdfa-e81bac0cc418 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:33:48.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-138" for this suite.
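The emptydir spec name encodes its parameters: run as a non-root UID, expect 0777 permissions, default (disk-backed) medium. A hedged sketch of the pod shape (name, image, and args are hypothetical; the real suite drives a mounttest-style image that prints the permissions it observes):

```python
# Hedged sketch of the emptyDir (non-root,0777,default) test pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-perms"},  # placeholder name
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},  # non-root UID
        "containers": [{
            "name": "test-container",
            "image": "example/mounttest",  # placeholder image
            # hypothetical flags: report the fs type and file permissions
            "args": ["--fs_type=/test-volume", "--file_perm=/test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # empty object {} selects the default (node-disk) medium,
        # as opposed to {"medium": "Memory"} for a tmpfs-backed volume
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
    },
}
```

The "success or failure" wait in the log then checks that the container exited successfully after observing the expected 0777 mode.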
Jul 19 23:33:56.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:33:57.035: INFO: namespace emptydir-138 deletion completed in 8.990372633s

• [SLOW TEST:16.229 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:33:57.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-376
I0719 23:33:57.123438 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-376, replica count: 1
I0719 23:33:58.173874 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0719 23:33:59.174062 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0719 23:34:00.174267 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0719 23:34:01.174480 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jul 19 23:34:01.335: INFO: Created: latency-svc-kg5sp
Jul 19 23:34:01.339: INFO: Got endpoints: latency-svc-kg5sp [65.14994ms]
Jul 19 23:34:01.370: INFO: Created: latency-svc-7fcvk
Jul 19 23:34:01.379: INFO: Got endpoints: latency-svc-7fcvk [39.857557ms]
Jul 19 23:34:01.408: INFO: Created: latency-svc-bbph6
Jul 19 23:34:01.426: INFO: Got endpoints: latency-svc-bbph6 [86.170004ms]
Jul 19 23:34:01.480: INFO: Created: latency-svc-hqvrr
Jul 19 23:34:01.498: INFO: Got endpoints: latency-svc-hqvrr [157.819987ms]
Jul 19 23:34:01.528: INFO: Created: latency-svc-n5trd
Jul 19 23:34:01.542: INFO: Got endpoints: latency-svc-n5trd [202.003249ms]
Jul 19 23:34:01.598: INFO: Created: latency-svc-crqvs
Jul 19 23:34:01.646: INFO: Got endpoints: latency-svc-crqvs [306.091322ms]
Jul 19 23:34:01.646: INFO: Created: latency-svc-rbtqv
Jul 19 23:34:01.678: INFO: Got endpoints: latency-svc-rbtqv [338.861283ms]
Jul 19 23:34:01.732: INFO: Created: latency-svc-rz72v
Jul 19 23:34:01.734: INFO: Got endpoints: latency-svc-rz72v [394.803195ms]
Jul 19 23:34:01.762: INFO: Created: latency-svc-qgszc
Jul 19 23:34:01.781: INFO: Got endpoints: latency-svc-qgszc [441.090338ms]
Jul 19 23:34:01.862: INFO: Created: latency-svc-6j5rm
Jul 19 23:34:01.865: INFO: Got endpoints: latency-svc-6j5rm [525.637427ms]
Jul 19 23:34:01.940: INFO: Created: latency-svc-h4s5b
Jul 19 23:34:01.946: INFO: Got endpoints: latency-svc-h4s5b [605.828539ms]
Jul 19 23:34:01.994: INFO: Created: latency-svc-fwbf5
Jul 19 23:34:01.997: INFO: Got endpoints: latency-svc-fwbf5 [657.486464ms]
Jul 19 23:34:02.038: INFO: Created: latency-svc-tc8mw
Jul 19 23:34:02.054: INFO: Got endpoints: latency-svc-tc8mw [714.22704ms]
Jul 19 23:34:02.168: INFO: Created: latency-svc-hvq4f
Jul 19 23:34:02.177: INFO: Got endpoints: latency-svc-hvq4f [837.456221ms]
Jul 19 23:34:02.210: INFO: Created: latency-svc-22xq4
Jul 19 23:34:02.225: INFO: Got endpoints: latency-svc-22xq4 [885.703448ms]
Jul 19 23:34:02.300: INFO: Created: latency-svc-28zw7
Jul 19 23:34:02.349: INFO: Got endpoints: latency-svc-28zw7 [1.009398574s]
Jul 19 23:34:02.351: INFO: Created: latency-svc-k85xj
Jul 19 23:34:02.385: INFO: Got endpoints: latency-svc-k85xj [1.00612354s]
Jul 19 23:34:02.449: INFO: Created: latency-svc-k7hr4
Jul 19 23:34:02.455: INFO: Got endpoints: latency-svc-k7hr4 [1.029181026s]
Jul 19 23:34:02.498: INFO: Created: latency-svc-lq4lb
Jul 19 23:34:02.532: INFO: Got endpoints: latency-svc-lq4lb [1.034156706s]
Jul 19 23:34:02.647: INFO: Created: latency-svc-q42h4
Jul 19 23:34:02.652: INFO: Got endpoints: latency-svc-q42h4 [1.110419998s]
Jul 19 23:34:02.679: INFO: Created: latency-svc-sb2tl
Jul 19 23:34:02.688: INFO: Got endpoints: latency-svc-sb2tl [1.041949864s]
Jul 19 23:34:02.719: INFO: Created: latency-svc-cpkzq
Jul 19 23:34:02.743: INFO: Got endpoints: latency-svc-cpkzq [1.064284009s]
Jul 19 23:34:02.802: INFO: Created: latency-svc-l5tvk
Jul 19 23:34:02.808: INFO: Got endpoints: latency-svc-l5tvk [1.073944466s]
Jul 19 23:34:02.836: INFO: Created: latency-svc-fp27c
Jul 19 23:34:02.863: INFO: Got endpoints: latency-svc-fp27c [1.081876022s]
Jul 19 23:34:02.952: INFO: Created: latency-svc-7v297
Jul 19 23:34:02.956: INFO: Got endpoints: latency-svc-7v297 [1.090986688s]
Jul 19 23:34:03.019: INFO: Created: latency-svc-pfxzr
Jul 19 23:34:03.037: INFO: Got endpoints: latency-svc-pfxzr [1.091855859s]
Jul 19 23:34:03.090: INFO: Created: latency-svc-29hsd
Jul 19 23:34:03.093: INFO: Got endpoints: latency-svc-29hsd [1.096108815s]
Jul 19 23:34:03.117: INFO: Created: latency-svc-zh8vd
Jul 19 23:34:03.134: INFO: Got endpoints: latency-svc-zh8vd [1.079849978s]
Jul 19 23:34:03.159: INFO: Created: latency-svc-76t69
Jul 19 23:34:03.170: INFO: Got endpoints: latency-svc-76t69 [992.398811ms]
Jul 19 23:34:03.251: INFO: Created: latency-svc-z8mvr
Jul 19 23:34:03.254: INFO: Got endpoints: latency-svc-z8mvr [1.028456316s]
Jul 19 23:34:03.284: INFO: Created: latency-svc-52kn8
Jul 19 23:34:03.302: INFO: Got endpoints: latency-svc-52kn8 [952.938269ms]
Jul 19 23:34:03.326: INFO: Created: latency-svc-w2tdq
Jul 19 23:34:03.345: INFO: Got endpoints: latency-svc-w2tdq [959.121889ms]
Jul 19 23:34:03.397: INFO: Created: latency-svc-m968r
Jul 19 23:34:03.398: INFO: Got endpoints: latency-svc-m968r [942.976287ms]
Jul 19 23:34:03.429: INFO: Created: latency-svc-m9rj9
Jul 19 23:34:03.460: INFO: Got endpoints: latency-svc-m9rj9 [927.377793ms]
Jul 19 23:34:03.483: INFO: Created: latency-svc-c8dq8
Jul 19 23:34:03.544: INFO: Got endpoints: latency-svc-c8dq8 [891.748018ms]
Jul 19 23:34:03.565: INFO: Created: latency-svc-l99d9
Jul 19 23:34:03.579: INFO: Got endpoints: latency-svc-l99d9 [891.128221ms]
Jul 19 23:34:03.601: INFO: Created: latency-svc-drk67
Jul 19 23:34:03.615: INFO: Got endpoints: latency-svc-drk67 [872.564714ms]
Jul 19 23:34:03.701: INFO: Created: latency-svc-t6gcx
Jul 19 23:34:03.706: INFO: Got endpoints: latency-svc-t6gcx [897.047087ms]
Jul 19 23:34:03.741: INFO: Created: latency-svc-cvvvm
Jul 19 23:34:03.761: INFO: Got endpoints: latency-svc-cvvvm [897.508524ms]
Jul 19 23:34:03.905: INFO: Created: latency-svc-tpsmv
Jul 19 23:34:03.910: INFO: Got endpoints: latency-svc-tpsmv [953.286199ms]
Jul 19 23:34:03.970: INFO: Created: latency-svc-n8fns
Jul 19 23:34:04.041: INFO: Got endpoints: latency-svc-n8fns [1.003895337s]
Jul 19 23:34:04.057: INFO: Created: latency-svc-b5nhs
Jul 19 23:34:04.078: INFO: Got endpoints: latency-svc-b5nhs [984.949523ms]
Jul 19 23:34:04.191: INFO: Created: latency-svc-trh7k
Jul 19 23:34:04.217: INFO: Got endpoints: latency-svc-trh7k [1.0824644s]
Jul 19 23:34:04.249: INFO: Created: latency-svc-mwcc6
Jul 19 23:34:04.271: INFO: Got endpoints: latency-svc-mwcc6 [1.100582019s]
Jul 19 23:34:04.327: INFO: Created: latency-svc-2kprz
Jul 19 23:34:04.355: INFO: Got
endpoints: latency-svc-2kprz [1.100828391s] Jul 19 23:34:04.389: INFO: Created: latency-svc-4wjfk Jul 19 23:34:04.397: INFO: Got endpoints: latency-svc-4wjfk [1.094565818s] Jul 19 23:34:04.455: INFO: Created: latency-svc-s9wzj Jul 19 23:34:04.458: INFO: Got endpoints: latency-svc-s9wzj [1.113492992s] Jul 19 23:34:04.507: INFO: Created: latency-svc-9p5mv Jul 19 23:34:04.524: INFO: Got endpoints: latency-svc-9p5mv [1.125707224s] Jul 19 23:34:04.598: INFO: Created: latency-svc-8lg5c Jul 19 23:34:04.602: INFO: Got endpoints: latency-svc-8lg5c [1.142291204s] Jul 19 23:34:04.635: INFO: Created: latency-svc-kvwwb Jul 19 23:34:04.650: INFO: Got endpoints: latency-svc-kvwwb [1.105699148s] Jul 19 23:34:04.742: INFO: Created: latency-svc-d4wpr Jul 19 23:34:04.746: INFO: Got endpoints: latency-svc-d4wpr [1.166439596s] Jul 19 23:34:04.783: INFO: Created: latency-svc-ft5x5 Jul 19 23:34:04.825: INFO: Got endpoints: latency-svc-ft5x5 [1.209986331s] Jul 19 23:34:04.893: INFO: Created: latency-svc-5wdnx Jul 19 23:34:04.895: INFO: Got endpoints: latency-svc-5wdnx [1.189228167s] Jul 19 23:34:04.923: INFO: Created: latency-svc-xmxpk Jul 19 23:34:04.941: INFO: Got endpoints: latency-svc-xmxpk [1.18043231s] Jul 19 23:34:04.964: INFO: Created: latency-svc-vsmxr Jul 19 23:34:04.984: INFO: Got endpoints: latency-svc-vsmxr [1.073736822s] Jul 19 23:34:05.035: INFO: Created: latency-svc-qzgp9 Jul 19 23:34:05.062: INFO: Got endpoints: latency-svc-qzgp9 [1.020881863s] Jul 19 23:34:05.085: INFO: Created: latency-svc-6gccm Jul 19 23:34:05.119: INFO: Got endpoints: latency-svc-6gccm [1.040849359s] Jul 19 23:34:05.185: INFO: Created: latency-svc-jzlt4 Jul 19 23:34:05.188: INFO: Got endpoints: latency-svc-jzlt4 [971.21762ms] Jul 19 23:34:05.252: INFO: Created: latency-svc-xq4gb Jul 19 23:34:05.266: INFO: Got endpoints: latency-svc-xq4gb [995.629542ms] Jul 19 23:34:05.323: INFO: Created: latency-svc-wb2nl Jul 19 23:34:05.328: INFO: Got endpoints: latency-svc-wb2nl [973.382989ms] Jul 19 23:34:05.354: 
INFO: Created: latency-svc-jqjpw Jul 19 23:34:05.369: INFO: Got endpoints: latency-svc-jqjpw [971.729119ms] Jul 19 23:34:05.388: INFO: Created: latency-svc-npj9k Jul 19 23:34:05.399: INFO: Got endpoints: latency-svc-npj9k [940.493941ms] Jul 19 23:34:05.498: INFO: Created: latency-svc-6kzfd Jul 19 23:34:05.500: INFO: Got endpoints: latency-svc-6kzfd [975.861292ms] Jul 19 23:34:05.558: INFO: Created: latency-svc-pcjdb Jul 19 23:34:05.573: INFO: Got endpoints: latency-svc-pcjdb [971.325702ms] Jul 19 23:34:05.594: INFO: Created: latency-svc-dtwlm Jul 19 23:34:05.652: INFO: Got endpoints: latency-svc-dtwlm [1.001864514s] Jul 19 23:34:05.653: INFO: Created: latency-svc-5txbf Jul 19 23:34:05.663: INFO: Got endpoints: latency-svc-5txbf [917.588582ms] Jul 19 23:34:05.706: INFO: Created: latency-svc-2trn7 Jul 19 23:34:05.724: INFO: Got endpoints: latency-svc-2trn7 [898.815761ms] Jul 19 23:34:05.778: INFO: Created: latency-svc-9cklv Jul 19 23:34:05.782: INFO: Got endpoints: latency-svc-9cklv [886.845736ms] Jul 19 23:34:05.840: INFO: Created: latency-svc-7g8q8 Jul 19 23:34:05.862: INFO: Got endpoints: latency-svc-7g8q8 [921.352608ms] Jul 19 23:34:05.963: INFO: Created: latency-svc-z79lr Jul 19 23:34:05.970: INFO: Got endpoints: latency-svc-z79lr [986.748316ms] Jul 19 23:34:05.994: INFO: Created: latency-svc-282x7 Jul 19 23:34:06.007: INFO: Got endpoints: latency-svc-282x7 [944.146987ms] Jul 19 23:34:06.048: INFO: Created: latency-svc-sps6b Jul 19 23:34:06.061: INFO: Got endpoints: latency-svc-sps6b [941.697071ms] Jul 19 23:34:06.125: INFO: Created: latency-svc-g6j92 Jul 19 23:34:06.128: INFO: Got endpoints: latency-svc-g6j92 [940.102019ms] Jul 19 23:34:06.206: INFO: Created: latency-svc-bztk7 Jul 19 23:34:06.281: INFO: Got endpoints: latency-svc-bztk7 [1.014463131s] Jul 19 23:34:06.302: INFO: Created: latency-svc-g72w8 Jul 19 23:34:06.338: INFO: Got endpoints: latency-svc-g72w8 [1.009767048s] Jul 19 23:34:06.419: INFO: Created: latency-svc-vw8q8 Jul 19 23:34:06.444: INFO: Got 
endpoints: latency-svc-vw8q8 [1.074840075s] Jul 19 23:34:06.498: INFO: Created: latency-svc-fwr5s Jul 19 23:34:06.562: INFO: Got endpoints: latency-svc-fwr5s [1.163136123s] Jul 19 23:34:06.584: INFO: Created: latency-svc-kbz4w Jul 19 23:34:06.608: INFO: Got endpoints: latency-svc-kbz4w [1.108477753s] Jul 19 23:34:06.632: INFO: Created: latency-svc-45nhk Jul 19 23:34:06.650: INFO: Got endpoints: latency-svc-45nhk [1.077099445s] Jul 19 23:34:06.706: INFO: Created: latency-svc-pc9pv Jul 19 23:34:06.714: INFO: Got endpoints: latency-svc-pc9pv [1.062168595s] Jul 19 23:34:06.756: INFO: Created: latency-svc-6bf8n Jul 19 23:34:06.768: INFO: Got endpoints: latency-svc-6bf8n [1.104457779s] Jul 19 23:34:06.799: INFO: Created: latency-svc-jpl66 Jul 19 23:34:06.855: INFO: Got endpoints: latency-svc-jpl66 [1.130663813s] Jul 19 23:34:06.857: INFO: Created: latency-svc-f7xkb Jul 19 23:34:06.873: INFO: Got endpoints: latency-svc-f7xkb [1.091383714s] Jul 19 23:34:06.936: INFO: Created: latency-svc-l8fff Jul 19 23:34:06.953: INFO: Got endpoints: latency-svc-l8fff [1.090270407s] Jul 19 23:34:07.026: INFO: Created: latency-svc-r4xsq Jul 19 23:34:07.036: INFO: Got endpoints: latency-svc-r4xsq [1.065195954s] Jul 19 23:34:07.057: INFO: Created: latency-svc-54mf6 Jul 19 23:34:07.072: INFO: Got endpoints: latency-svc-54mf6 [1.065202546s] Jul 19 23:34:07.093: INFO: Created: latency-svc-w5xc6 Jul 19 23:34:07.125: INFO: Got endpoints: latency-svc-w5xc6 [1.064288595s] Jul 19 23:34:07.136: INFO: Created: latency-svc-m285x Jul 19 23:34:07.164: INFO: Got endpoints: latency-svc-m285x [1.035829409s] Jul 19 23:34:07.200: INFO: Created: latency-svc-42pj6 Jul 19 23:34:07.210: INFO: Got endpoints: latency-svc-42pj6 [929.353069ms] Jul 19 23:34:07.264: INFO: Created: latency-svc-67p9b Jul 19 23:34:07.266: INFO: Got endpoints: latency-svc-67p9b [927.598248ms] Jul 19 23:34:07.328: INFO: Created: latency-svc-2kzqw Jul 19 23:34:07.345: INFO: Got endpoints: latency-svc-2kzqw [900.907624ms] Jul 19 23:34:07.401: 
INFO: Created: latency-svc-dqlwc Jul 19 23:34:07.404: INFO: Got endpoints: latency-svc-dqlwc [842.378746ms] Jul 19 23:34:07.434: INFO: Created: latency-svc-fx79g Jul 19 23:34:07.452: INFO: Got endpoints: latency-svc-fx79g [843.383257ms] Jul 19 23:34:07.476: INFO: Created: latency-svc-4b2nj Jul 19 23:34:07.488: INFO: Got endpoints: latency-svc-4b2nj [837.284483ms] Jul 19 23:34:07.587: INFO: Created: latency-svc-bx4h2 Jul 19 23:34:07.622: INFO: Got endpoints: latency-svc-bx4h2 [907.68585ms] Jul 19 23:34:07.623: INFO: Created: latency-svc-pztll Jul 19 23:34:07.638: INFO: Got endpoints: latency-svc-pztll [870.201422ms] Jul 19 23:34:07.670: INFO: Created: latency-svc-qdn95 Jul 19 23:34:07.766: INFO: Got endpoints: latency-svc-qdn95 [910.645848ms] Jul 19 23:34:07.767: INFO: Created: latency-svc-r9qft Jul 19 23:34:07.770: INFO: Got endpoints: latency-svc-r9qft [896.909937ms] Jul 19 23:34:07.814: INFO: Created: latency-svc-2rpm6 Jul 19 23:34:07.837: INFO: Got endpoints: latency-svc-2rpm6 [884.400701ms] Jul 19 23:34:07.922: INFO: Created: latency-svc-krhkh Jul 19 23:34:07.924: INFO: Got endpoints: latency-svc-krhkh [888.786766ms] Jul 19 23:34:07.997: INFO: Created: latency-svc-r4f2n Jul 19 23:34:08.000: INFO: Got endpoints: latency-svc-r4f2n [928.083263ms] Jul 19 23:34:08.072: INFO: Created: latency-svc-s6kf4 Jul 19 23:34:08.090: INFO: Got endpoints: latency-svc-s6kf4 [964.221631ms] Jul 19 23:34:08.114: INFO: Created: latency-svc-hn4x7 Jul 19 23:34:08.119: INFO: Got endpoints: latency-svc-hn4x7 [955.268931ms] Jul 19 23:34:08.166: INFO: Created: latency-svc-d6stm Jul 19 23:34:08.215: INFO: Got endpoints: latency-svc-d6stm [1.004522597s] Jul 19 23:34:08.217: INFO: Created: latency-svc-sxb8q Jul 19 23:34:08.222: INFO: Got endpoints: latency-svc-sxb8q [955.79358ms] Jul 19 23:34:08.245: INFO: Created: latency-svc-hj86m Jul 19 23:34:08.275: INFO: Got endpoints: latency-svc-hj86m [930.142895ms] Jul 19 23:34:08.395: INFO: Created: latency-svc-zvbk2 Jul 19 23:34:08.399: INFO: Got 
endpoints: latency-svc-zvbk2 [994.265355ms] Jul 19 23:34:08.432: INFO: Created: latency-svc-l52kk Jul 19 23:34:08.451: INFO: Got endpoints: latency-svc-l52kk [999.063082ms] Jul 19 23:34:08.490: INFO: Created: latency-svc-tkjtr Jul 19 23:34:08.562: INFO: Got endpoints: latency-svc-tkjtr [1.07427947s] Jul 19 23:34:08.565: INFO: Created: latency-svc-48l7q Jul 19 23:34:08.571: INFO: Got endpoints: latency-svc-48l7q [948.692202ms] Jul 19 23:34:08.593: INFO: Created: latency-svc-kfnh4 Jul 19 23:34:08.607: INFO: Got endpoints: latency-svc-kfnh4 [969.052702ms] Jul 19 23:34:08.653: INFO: Created: latency-svc-xqdcw Jul 19 23:34:08.724: INFO: Got endpoints: latency-svc-xqdcw [957.91306ms] Jul 19 23:34:08.748: INFO: Created: latency-svc-hcdvd Jul 19 23:34:08.776: INFO: Got endpoints: latency-svc-hcdvd [1.00556203s] Jul 19 23:34:08.874: INFO: Created: latency-svc-7npxl Jul 19 23:34:08.877: INFO: Got endpoints: latency-svc-7npxl [1.039668011s] Jul 19 23:34:08.905: INFO: Created: latency-svc-5bz2w Jul 19 23:34:08.920: INFO: Got endpoints: latency-svc-5bz2w [995.498051ms] Jul 19 23:34:08.941: INFO: Created: latency-svc-4tm9c Jul 19 23:34:08.956: INFO: Got endpoints: latency-svc-4tm9c [956.253489ms] Jul 19 23:34:09.036: INFO: Created: latency-svc-m6nk8 Jul 19 23:34:09.038: INFO: Got endpoints: latency-svc-m6nk8 [948.48039ms] Jul 19 23:34:09.115: INFO: Created: latency-svc-hnrh8 Jul 19 23:34:09.167: INFO: Got endpoints: latency-svc-hnrh8 [1.047354696s] Jul 19 23:34:09.180: INFO: Created: latency-svc-jsxp7 Jul 19 23:34:09.197: INFO: Got endpoints: latency-svc-jsxp7 [982.469033ms] Jul 19 23:34:09.234: INFO: Created: latency-svc-wclgz Jul 19 23:34:09.245: INFO: Got endpoints: latency-svc-wclgz [1.023480426s] Jul 19 23:34:09.302: INFO: Created: latency-svc-v7nhh Jul 19 23:34:09.306: INFO: Got endpoints: latency-svc-v7nhh [1.030725783s] Jul 19 23:34:09.349: INFO: Created: latency-svc-fzlxg Jul 19 23:34:09.366: INFO: Got endpoints: latency-svc-fzlxg [966.801006ms] Jul 19 23:34:09.385: 
INFO: Created: latency-svc-msrzp Jul 19 23:34:09.430: INFO: Got endpoints: latency-svc-msrzp [979.573867ms] Jul 19 23:34:09.449: INFO: Created: latency-svc-2hwr8 Jul 19 23:34:09.487: INFO: Got endpoints: latency-svc-2hwr8 [924.313598ms] Jul 19 23:34:09.593: INFO: Created: latency-svc-qxqkl Jul 19 23:34:09.596: INFO: Got endpoints: latency-svc-qxqkl [1.025493792s] Jul 19 23:34:09.625: INFO: Created: latency-svc-bvj8t Jul 19 23:34:09.642: INFO: Got endpoints: latency-svc-bvj8t [1.034904889s] Jul 19 23:34:09.742: INFO: Created: latency-svc-cvpxb Jul 19 23:34:09.773: INFO: Got endpoints: latency-svc-cvpxb [1.049484123s] Jul 19 23:34:09.776: INFO: Created: latency-svc-88lwn Jul 19 23:34:09.786: INFO: Got endpoints: latency-svc-88lwn [1.010506996s] Jul 19 23:34:09.829: INFO: Created: latency-svc-8lxhd Jul 19 23:34:09.868: INFO: Got endpoints: latency-svc-8lxhd [990.526017ms] Jul 19 23:34:09.883: INFO: Created: latency-svc-vlkw6 Jul 19 23:34:09.901: INFO: Got endpoints: latency-svc-vlkw6 [981.39031ms] Jul 19 23:34:09.965: INFO: Created: latency-svc-d244d Jul 19 23:34:10.335: INFO: Got endpoints: latency-svc-d244d [1.378617274s] Jul 19 23:34:10.357: INFO: Created: latency-svc-j7v4x Jul 19 23:34:10.417: INFO: Got endpoints: latency-svc-j7v4x [1.378719635s] Jul 19 23:34:10.603: INFO: Created: latency-svc-889kq Jul 19 23:34:10.621: INFO: Got endpoints: latency-svc-889kq [1.454159001s] Jul 19 23:34:10.736: INFO: Created: latency-svc-4gcx7 Jul 19 23:34:10.739: INFO: Got endpoints: latency-svc-4gcx7 [1.541503875s] Jul 19 23:34:10.813: INFO: Created: latency-svc-jp74k Jul 19 23:34:10.819: INFO: Got endpoints: latency-svc-jp74k [1.573856215s] Jul 19 23:34:10.874: INFO: Created: latency-svc-wkwkb Jul 19 23:34:10.915: INFO: Got endpoints: latency-svc-wkwkb [1.609624334s] Jul 19 23:34:10.939: INFO: Created: latency-svc-27m49 Jul 19 23:34:10.945: INFO: Got endpoints: latency-svc-27m49 [1.579730163s] Jul 19 23:34:10.970: INFO: Created: latency-svc-9vgxl Jul 19 23:34:11.017: INFO: Got 
endpoints: latency-svc-9vgxl [1.586811865s] Jul 19 23:34:11.022: INFO: Created: latency-svc-89dbb Jul 19 23:34:11.046: INFO: Got endpoints: latency-svc-89dbb [1.559178964s] Jul 19 23:34:11.076: INFO: Created: latency-svc-dgsr9 Jul 19 23:34:11.084: INFO: Got endpoints: latency-svc-dgsr9 [1.487903961s] Jul 19 23:34:11.107: INFO: Created: latency-svc-22m9c Jul 19 23:34:11.167: INFO: Got endpoints: latency-svc-22m9c [1.524444317s] Jul 19 23:34:11.169: INFO: Created: latency-svc-4vmrr Jul 19 23:34:11.175: INFO: Got endpoints: latency-svc-4vmrr [1.401439437s] Jul 19 23:34:11.202: INFO: Created: latency-svc-bhsml Jul 19 23:34:11.217: INFO: Got endpoints: latency-svc-bhsml [1.430855107s] Jul 19 23:34:11.243: INFO: Created: latency-svc-llpkz Jul 19 23:34:11.304: INFO: Got endpoints: latency-svc-llpkz [1.436850149s] Jul 19 23:34:11.317: INFO: Created: latency-svc-6trcf Jul 19 23:34:11.334: INFO: Got endpoints: latency-svc-6trcf [1.432549272s] Jul 19 23:34:11.367: INFO: Created: latency-svc-dggkj Jul 19 23:34:11.382: INFO: Got endpoints: latency-svc-dggkj [1.046444885s] Jul 19 23:34:11.462: INFO: Created: latency-svc-48lg4 Jul 19 23:34:11.468: INFO: Got endpoints: latency-svc-48lg4 [1.051473858s] Jul 19 23:34:11.792: INFO: Created: latency-svc-75xpl Jul 19 23:34:11.891: INFO: Got endpoints: latency-svc-75xpl [1.270516196s] Jul 19 23:34:11.927: INFO: Created: latency-svc-4x96x Jul 19 23:34:11.951: INFO: Got endpoints: latency-svc-4x96x [1.212435767s] Jul 19 23:34:11.977: INFO: Created: latency-svc-nxhbp Jul 19 23:34:12.047: INFO: Got endpoints: latency-svc-nxhbp [1.227782193s] Jul 19 23:34:12.084: INFO: Created: latency-svc-9hxnn Jul 19 23:34:12.102: INFO: Got endpoints: latency-svc-9hxnn [1.186371667s] Jul 19 23:34:12.125: INFO: Created: latency-svc-vprxf Jul 19 23:34:12.144: INFO: Got endpoints: latency-svc-vprxf [1.198559833s] Jul 19 23:34:12.191: INFO: Created: latency-svc-9fvc8 Jul 19 23:34:12.198: INFO: Got endpoints: latency-svc-9fvc8 [1.180596411s] Jul 19 23:34:12.229: 
INFO: Created: latency-svc-sp7cn Jul 19 23:34:12.252: INFO: Got endpoints: latency-svc-sp7cn [1.206623479s] Jul 19 23:34:12.281: INFO: Created: latency-svc-5xg5j Jul 19 23:34:12.359: INFO: Got endpoints: latency-svc-5xg5j [1.274587176s] Jul 19 23:34:12.361: INFO: Created: latency-svc-w5txx Jul 19 23:34:12.366: INFO: Got endpoints: latency-svc-w5txx [1.199657961s] Jul 19 23:34:12.398: INFO: Created: latency-svc-lh6kr Jul 19 23:34:12.422: INFO: Got endpoints: latency-svc-lh6kr [1.246818882s] Jul 19 23:34:12.503: INFO: Created: latency-svc-fglmq Jul 19 23:34:12.507: INFO: Got endpoints: latency-svc-fglmq [1.289754191s] Jul 19 23:34:12.571: INFO: Created: latency-svc-nb6nx Jul 19 23:34:12.589: INFO: Got endpoints: latency-svc-nb6nx [1.284698052s] Jul 19 23:34:12.640: INFO: Created: latency-svc-h2vwl Jul 19 23:34:12.645: INFO: Got endpoints: latency-svc-h2vwl [1.310878515s] Jul 19 23:34:12.689: INFO: Created: latency-svc-x5vqn Jul 19 23:34:12.704: INFO: Got endpoints: latency-svc-x5vqn [1.322451432s] Jul 19 23:34:12.779: INFO: Created: latency-svc-8tkmc Jul 19 23:34:12.806: INFO: Got endpoints: latency-svc-8tkmc [1.337646319s] Jul 19 23:34:12.841: INFO: Created: latency-svc-k5k5j Jul 19 23:34:12.848: INFO: Got endpoints: latency-svc-k5k5j [956.405635ms] Jul 19 23:34:12.955: INFO: Created: latency-svc-k8nwc Jul 19 23:34:12.974: INFO: Got endpoints: latency-svc-k8nwc [1.022714228s] Jul 19 23:34:13.003: INFO: Created: latency-svc-gb9kh Jul 19 23:34:13.059: INFO: Got endpoints: latency-svc-gb9kh [1.012417989s] Jul 19 23:34:13.067: INFO: Created: latency-svc-l5gmw Jul 19 23:34:13.082: INFO: Got endpoints: latency-svc-l5gmw [980.625908ms] Jul 19 23:34:13.129: INFO: Created: latency-svc-skvtf Jul 19 23:34:13.143: INFO: Got endpoints: latency-svc-skvtf [998.830433ms] Jul 19 23:34:13.222: INFO: Created: latency-svc-kcbhr Jul 19 23:34:13.228: INFO: Got endpoints: latency-svc-kcbhr [1.030178511s] Jul 19 23:34:13.259: INFO: Created: latency-svc-lfb65 Jul 19 23:34:13.275: INFO: Got 
endpoints: latency-svc-lfb65 [1.022689302s] Jul 19 23:34:13.311: INFO: Created: latency-svc-s262z Jul 19 23:34:13.382: INFO: Got endpoints: latency-svc-s262z [1.023619449s] Jul 19 23:34:13.390: INFO: Created: latency-svc-gcskm Jul 19 23:34:13.396: INFO: Got endpoints: latency-svc-gcskm [1.029408839s] Jul 19 23:34:13.423: INFO: Created: latency-svc-r4dqb Jul 19 23:34:13.431: INFO: Got endpoints: latency-svc-r4dqb [1.009594869s] Jul 19 23:34:13.459: INFO: Created: latency-svc-9l79b Jul 19 23:34:13.658: INFO: Got endpoints: latency-svc-9l79b [1.151299375s] Jul 19 23:34:13.892: INFO: Created: latency-svc-8gjs2 Jul 19 23:34:14.129: INFO: Got endpoints: latency-svc-8gjs2 [1.539713876s] Jul 19 23:34:14.299: INFO: Created: latency-svc-p2tw2 Jul 19 23:34:14.304: INFO: Got endpoints: latency-svc-p2tw2 [1.659326561s] Jul 19 23:34:14.390: INFO: Created: latency-svc-85trf Jul 19 23:34:14.502: INFO: Got endpoints: latency-svc-85trf [1.798102252s] Jul 19 23:34:14.509: INFO: Created: latency-svc-dfkpq Jul 19 23:34:14.517: INFO: Got endpoints: latency-svc-dfkpq [1.710809544s] Jul 19 23:34:14.575: INFO: Created: latency-svc-k9t58 Jul 19 23:34:14.583: INFO: Got endpoints: latency-svc-k9t58 [1.73529227s] Jul 19 23:34:14.634: INFO: Created: latency-svc-cwndr Jul 19 23:34:14.637: INFO: Got endpoints: latency-svc-cwndr [1.663072604s] Jul 19 23:34:14.666: INFO: Created: latency-svc-hb7c7 Jul 19 23:34:14.694: INFO: Got endpoints: latency-svc-hb7c7 [1.634711309s] Jul 19 23:34:14.784: INFO: Created: latency-svc-fqxbs Jul 19 23:34:14.788: INFO: Got endpoints: latency-svc-fqxbs [1.705883101s] Jul 19 23:34:14.826: INFO: Created: latency-svc-47sgx Jul 19 23:34:14.848: INFO: Got endpoints: latency-svc-47sgx [1.70542527s] Jul 19 23:34:14.958: INFO: Created: latency-svc-btsbx Jul 19 23:34:14.960: INFO: Got endpoints: latency-svc-btsbx [1.732007543s] Jul 19 23:34:15.014: INFO: Created: latency-svc-fwdrn Jul 19 23:34:15.028: INFO: Got endpoints: latency-svc-fwdrn [1.752975357s] Jul 19 23:34:15.108: 
INFO: Created: latency-svc-zkcxj Jul 19 23:34:15.138: INFO: Created: latency-svc-gzzfw Jul 19 23:34:15.138: INFO: Got endpoints: latency-svc-zkcxj [1.75558841s] Jul 19 23:34:15.168: INFO: Got endpoints: latency-svc-gzzfw [1.771855965s] Jul 19 23:34:15.204: INFO: Created: latency-svc-smrlf Jul 19 23:34:15.233: INFO: Got endpoints: latency-svc-smrlf [1.801853782s] Jul 19 23:34:15.260: INFO: Created: latency-svc-6c2s5 Jul 19 23:34:15.269: INFO: Got endpoints: latency-svc-6c2s5 [1.610378448s] Jul 19 23:34:15.302: INFO: Created: latency-svc-2vcbd Jul 19 23:34:15.311: INFO: Got endpoints: latency-svc-2vcbd [1.181989672s] Jul 19 23:34:15.367: INFO: Created: latency-svc-v9dnr Jul 19 23:34:15.390: INFO: Got endpoints: latency-svc-v9dnr [1.085347871s] Jul 19 23:34:15.391: INFO: Created: latency-svc-zl6m6 Jul 19 23:34:15.408: INFO: Got endpoints: latency-svc-zl6m6 [905.488221ms] Jul 19 23:34:15.439: INFO: Created: latency-svc-njkw6 Jul 19 23:34:15.457: INFO: Got endpoints: latency-svc-njkw6 [939.950663ms] Jul 19 23:34:15.520: INFO: Created: latency-svc-zdqmh Jul 19 23:34:15.528: INFO: Got endpoints: latency-svc-zdqmh [944.727552ms] Jul 19 23:34:15.578: INFO: Created: latency-svc-6sft4 Jul 19 23:34:15.688: INFO: Got endpoints: latency-svc-6sft4 [1.0507667s] Jul 19 23:34:15.690: INFO: Created: latency-svc-hg9xl Jul 19 23:34:15.733: INFO: Got endpoints: latency-svc-hg9xl [1.03882613s] Jul 19 23:34:15.775: INFO: Created: latency-svc-5nqvh Jul 19 23:34:15.787: INFO: Got endpoints: latency-svc-5nqvh [998.358653ms] Jul 19 23:34:15.862: INFO: Created: latency-svc-dxbmd Jul 19 23:34:15.865: INFO: Got endpoints: latency-svc-dxbmd [1.016205806s] Jul 19 23:34:15.894: INFO: Created: latency-svc-65mqx Jul 19 23:34:15.913: INFO: Got endpoints: latency-svc-65mqx [953.130573ms] Jul 19 23:34:15.948: INFO: Created: latency-svc-hrgz9 Jul 19 23:34:16.030: INFO: Got endpoints: latency-svc-hrgz9 [1.001294184s] Jul 19 23:34:16.031: INFO: Created: latency-svc-dbdk7 Jul 19 23:34:16.035: INFO: Got 
endpoints: latency-svc-dbdk7 [896.886225ms] Jul 19 23:34:16.070: INFO: Created: latency-svc-7bvdq Jul 19 23:34:16.082: INFO: Got endpoints: latency-svc-7bvdq [913.601033ms] Jul 19 23:34:16.082: INFO: Latencies: [39.857557ms 86.170004ms 157.819987ms 202.003249ms 306.091322ms 338.861283ms 394.803195ms 441.090338ms 525.637427ms 605.828539ms 657.486464ms 714.22704ms 837.284483ms 837.456221ms 842.378746ms 843.383257ms 870.201422ms 872.564714ms 884.400701ms 885.703448ms 886.845736ms 888.786766ms 891.128221ms 891.748018ms 896.886225ms 896.909937ms 897.047087ms 897.508524ms 898.815761ms 900.907624ms 905.488221ms 907.68585ms 910.645848ms 913.601033ms 917.588582ms 921.352608ms 924.313598ms 927.377793ms 927.598248ms 928.083263ms 929.353069ms 930.142895ms 939.950663ms 940.102019ms 940.493941ms 941.697071ms 942.976287ms 944.146987ms 944.727552ms 948.48039ms 948.692202ms 952.938269ms 953.130573ms 953.286199ms 955.268931ms 955.79358ms 956.253489ms 956.405635ms 957.91306ms 959.121889ms 964.221631ms 966.801006ms 969.052702ms 971.21762ms 971.325702ms 971.729119ms 973.382989ms 975.861292ms 979.573867ms 980.625908ms 981.39031ms 982.469033ms 984.949523ms 986.748316ms 990.526017ms 992.398811ms 994.265355ms 995.498051ms 995.629542ms 998.358653ms 998.830433ms 999.063082ms 1.001294184s 1.001864514s 1.003895337s 1.004522597s 1.00556203s 1.00612354s 1.009398574s 1.009594869s 1.009767048s 1.010506996s 1.012417989s 1.014463131s 1.016205806s 1.020881863s 1.022689302s 1.022714228s 1.023480426s 1.023619449s 1.025493792s 1.028456316s 1.029181026s 1.029408839s 1.030178511s 1.030725783s 1.034156706s 1.034904889s 1.035829409s 1.03882613s 1.039668011s 1.040849359s 1.041949864s 1.046444885s 1.047354696s 1.049484123s 1.0507667s 1.051473858s 1.062168595s 1.064284009s 1.064288595s 1.065195954s 1.065202546s 1.073736822s 1.073944466s 1.07427947s 1.074840075s 1.077099445s 1.079849978s 1.081876022s 1.0824644s 1.085347871s 1.090270407s 1.090986688s 1.091383714s 1.091855859s 1.094565818s 1.096108815s 
1.100582019s 1.100828391s 1.104457779s 1.105699148s 1.108477753s 1.110419998s 1.113492992s 1.125707224s 1.130663813s 1.142291204s 1.151299375s 1.163136123s 1.166439596s 1.18043231s 1.180596411s 1.181989672s 1.186371667s 1.189228167s 1.198559833s 1.199657961s 1.206623479s 1.209986331s 1.212435767s 1.227782193s 1.246818882s 1.270516196s 1.274587176s 1.284698052s 1.289754191s 1.310878515s 1.322451432s 1.337646319s 1.378617274s 1.378719635s 1.401439437s 1.430855107s 1.432549272s 1.436850149s 1.454159001s 1.487903961s 1.524444317s 1.539713876s 1.541503875s 1.559178964s 1.573856215s 1.579730163s 1.586811865s 1.609624334s 1.610378448s 1.634711309s 1.659326561s 1.663072604s 1.70542527s 1.705883101s 1.710809544s 1.732007543s 1.73529227s 1.752975357s 1.75558841s 1.771855965s 1.798102252s 1.801853782s] Jul 19 23:34:16.082: INFO: 50 %ile: 1.025493792s Jul 19 23:34:16.082: INFO: 90 %ile: 1.541503875s Jul 19 23:34:16.082: INFO: 99 %ile: 1.798102252s Jul 19 23:34:16.082: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:34:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-376" for this suite. 
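The 50/90/99 %ile figures reported above are taken from the 200 sorted latency samples listed in the log. A minimal nearest-rank sketch of that kind of percentile lookup (a hypothetical helper, not the e2e framework's actual code):

```python
import math

def percentile(sorted_samples, p):
    # Nearest-rank method: the value at rank ceil(p/100 * n)
    # in an ascending-sorted sample (rank is 1-based).
    n = len(sorted_samples)
    k = max(1, math.ceil(p / 100 * n))
    return sorted_samples[k - 1]

# With 200 samples, the 50th percentile is the 100th sorted value,
# matching how the run above picks one concrete sample per percentile.
samples = sorted(range(1, 201))
p50 = percentile(samples, 50)
```

With 200 samples the 90th and 99th percentiles land on the 180th and 198th sorted values, which is why each reported percentile equals one of the raw latencies in the list rather than an interpolated figure.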
Jul 19 23:35:34.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:35:35.348: INFO: namespace svc-latency-376 deletion completed in 1m19.259826228s • [SLOW TEST:98.313 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:35:35.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jul 19 23:35:40.378: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3379 pod-service-account-36048cea-3c6e-49dd-b7f2-7efee4b4fe6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 19 23:35:40.617: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3379 pod-service-account-36048cea-3c6e-49dd-b7f2-7efee4b4fe6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 19 23:35:40.844: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3379 
pod-service-account-36048cea-3c6e-49dd-b7f2-7efee4b4fe6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:35:41.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3379" for this suite. Jul 19 23:35:47.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:35:47.154: INFO: namespace svcaccounts-3379 deletion completed in 6.11831937s • [SLOW TEST:11.806 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:35:47.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 19 23:35:47.238: INFO: Waiting up to 5m0s for pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654" in namespace "emptydir-5177" to be "success or failure" Jul 19 23:35:47.260: INFO: 
Pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654": Phase="Pending", Reason="", readiness=false. Elapsed: 21.231656ms Jul 19 23:35:49.487: INFO: Pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248328089s Jul 19 23:35:51.491: INFO: Pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654": Phase="Running", Reason="", readiness=true. Elapsed: 4.252282193s Jul 19 23:35:53.523: INFO: Pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284153755s STEP: Saw pod success Jul 19 23:35:53.523: INFO: Pod "pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654" satisfied condition "success or failure" Jul 19 23:35:53.526: INFO: Trying to get logs from node iruya-worker pod pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654 container test-container: STEP: delete the pod Jul 19 23:35:53.544: INFO: Waiting for pod pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654 to disappear Jul 19 23:35:53.565: INFO: Pod pod-e36d2cf3-76eb-4ccf-b963-a6a045ac3654 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:35:53.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5177" for this suite. 
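The "Waiting up to 5m0s for pod … to be 'success or failure'" lines above come from a poll loop: the framework re-reads the pod phase every couple of seconds until it reaches a terminal phase or the timeout elapses. A simplified sketch of that pattern (the function name and signature here are illustrative, not the framework's API):

```python
import time

def wait_for_terminal_phase(get_phase, terminal=("Succeeded", "Failed"),
                            timeout=300, interval=2):
    # Poll a phase getter until it returns a terminal phase,
    # raising if the overall timeout elapses first.
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in terminal:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod did not reach {terminal} within {timeout}s")
```

This matches the log's shape: successive "Phase=Pending" entries roughly two seconds apart, ending when "Succeeded" satisfies the "success or failure" condition.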
Jul 19 23:35:59.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:35:59.685: INFO: namespace emptydir-5177 deletion completed in 6.116760344s • [SLOW TEST:12.530 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:35:59.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8648, will wait for the garbage collector to delete the pods Jul 19 23:36:05.887: INFO: Deleting Job.batch foo took: 6.132556ms Jul 19 23:36:06.187: INFO: Terminating Job.batch foo pods took: 300.334578ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:36:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8648" for this suite. 
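The Job deletion test above creates a parallel Job named `foo`, waits for active pods to match parallelism, then deletes the Job and waits for the garbage collector to remove its pods. A sketch of such a Job (image and command are assumptions):

```yaml
# Sketch only: a parallel Job whose deletion cascades to its pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sleep", "3600"]
      restartPolicy: Never
```

Deleting it with `kubectl delete job foo` uses cascading deletion by default, matching the "will wait for the garbage collector to delete the pods" step in the log.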
Jul 19 23:36:55.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:36:56.581: INFO: namespace job-8648 deletion completed in 9.360760183s • [SLOW TEST:56.895 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:36:56.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jul 19 23:37:00.219: INFO: created pod pod-service-account-defaultsa Jul 19 23:37:00.219: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 19 23:37:00.518: INFO: created pod pod-service-account-mountsa Jul 19 23:37:00.518: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 19 23:37:00.521: INFO: created pod pod-service-account-nomountsa Jul 19 23:37:00.521: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 19 23:37:00.573: INFO: created pod pod-service-account-defaultsa-mountspec Jul 19 23:37:00.573: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 19 
23:37:00.746: INFO: created pod pod-service-account-mountsa-mountspec Jul 19 23:37:00.746: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 19 23:37:00.792: INFO: created pod pod-service-account-nomountsa-mountspec Jul 19 23:37:00.792: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 19 23:37:00.904: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 19 23:37:00.904: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 19 23:37:00.946: INFO: created pod pod-service-account-mountsa-nomountspec Jul 19 23:37:00.946: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 19 23:37:01.116: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 19 23:37:01.116: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:37:01.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4145" for this suite. 
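The automount opt-out test above creates pods against service accounts with every combination of `automountServiceAccountToken` at the ServiceAccount and Pod levels. The pod-level field, when set, overrides the ServiceAccount-level one, which is why `nomountsa-mountspec` logs `mount: true` while `defaultsa-nomountspec` logs `mount: false`. A sketch of the two settings (resource names are illustrative):

```yaml
# Sketch only: opting out of API token automount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level setting wins over the SA's
  containers:
  - name: main
    image: busybox
```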
Jul 19 23:37:43.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:37:44.175: INFO: namespace svcaccounts-4145 deletion completed in 42.961100776s • [SLOW TEST:47.594 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:37:44.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 19 23:37:51.758: INFO: Successfully updated pod "labelsupdate4490b194-040d-468e-92c7-a51b025bf704" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:37:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5451" for this suite. 
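The Downward API test above mounts pod labels into a volume and then updates the labels, verifying the kubelet refreshes the projected file. A sketch of the volume wiring (pod name, image, and paths are assumptions):

```yaml
# Sketch only: labels exposed via a downwardAPI volume; the file is
# refreshed by the kubelet when the pod's labels are modified.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```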
Jul 19 23:38:17.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:38:17.938: INFO: namespace downward-api-5451 deletion completed in 24.15791245s • [SLOW TEST:33.763 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:38:17.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jul 19 23:38:18.066: INFO: namespace kubectl-6130 Jul 19 23:38:18.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6130' Jul 19 23:38:18.466: INFO: stderr: "" Jul 19 23:38:18.466: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jul 19 23:38:19.476: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:19.476: INFO: Found 0 / 1 Jul 19 23:38:20.620: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:20.620: INFO: Found 0 / 1 Jul 19 23:38:21.471: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:21.471: INFO: Found 0 / 1 Jul 19 23:38:22.525: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:22.525: INFO: Found 0 / 1 Jul 19 23:38:23.776: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:23.777: INFO: Found 0 / 1 Jul 19 23:38:24.470: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:24.471: INFO: Found 0 / 1 Jul 19 23:38:25.471: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:25.471: INFO: Found 1 / 1 Jul 19 23:38:25.471: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 19 23:38:25.474: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:38:25.474: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 19 23:38:25.474: INFO: wait on redis-master startup in kubectl-6130 Jul 19 23:38:25.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ntr8v redis-master --namespace=kubectl-6130' Jul 19 23:38:25.588: INFO: stderr: "" Jul 19 23:38:25.588: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Jul 23:38:24.596 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Jul 23:38:24.596 # Server started, Redis version 3.2.12\n1:M 19 Jul 23:38:24.596 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Jul 23:38:24.596 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jul 19 23:38:25.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6130' Jul 19 23:38:25.865: INFO: stderr: "" Jul 19 23:38:25.865: INFO: stdout: "service/rm2 exposed\n" Jul 19 23:38:25.879: INFO: Service rm2 in namespace kubectl-6130 found. STEP: exposing service Jul 19 23:38:27.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6130' Jul 19 23:38:28.489: INFO: stderr: "" Jul 19 23:38:28.489: INFO: stdout: "service/rm3 exposed\n" Jul 19 23:38:28.533: INFO: Service rm3 in namespace kubectl-6130 found. 
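The `kubectl expose` commands in the log generate Service objects from the RC's pod selector. The first command, `expose rc redis-master --name=rm2 --port=1234 --target-port=6379`, is roughly equivalent to applying a Service like this (a sketch; the exact generated object may carry additional defaulted fields):

```yaml
# Sketch only: approximate Service generated by `kubectl expose rc`.
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # copied from the RC's pod template labels
  ports:
  - port: 1234        # --port
    targetPort: 6379  # --target-port
```

The second command then exposes service `rm2` itself as `rm3` on port 2345, reusing the same selector.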
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:38:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6130" for this suite. Jul 19 23:38:52.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:38:53.022: INFO: namespace kubectl-6130 deletion completed in 22.479441642s • [SLOW TEST:35.084 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:38:53.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 19 23:38:53.376: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every 
node of the cluster. Jul 19 23:38:53.386: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:53.391: INFO: Number of nodes with available pods: 0 Jul 19 23:38:53.391: INFO: Node iruya-worker is running more than one daemon pod Jul 19 23:38:54.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:54.398: INFO: Number of nodes with available pods: 0 Jul 19 23:38:54.398: INFO: Node iruya-worker is running more than one daemon pod Jul 19 23:38:55.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:55.398: INFO: Number of nodes with available pods: 0 Jul 19 23:38:55.398: INFO: Node iruya-worker is running more than one daemon pod Jul 19 23:38:56.397: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:56.400: INFO: Number of nodes with available pods: 0 Jul 19 23:38:56.400: INFO: Node iruya-worker is running more than one daemon pod Jul 19 23:38:57.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:57.402: INFO: Number of nodes with available pods: 1 Jul 19 23:38:57.402: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:38:58.396: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:58.399: INFO: Number of nodes with available pods: 2 Jul 19 
23:38:58.399: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 19 23:38:58.469: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:38:58.469: INFO: Wrong image for pod: daemon-set-fzfqm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:38:58.504: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:38:59.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:38:59.508: INFO: Wrong image for pod: daemon-set-fzfqm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:38:59.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:00.513: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:00.513: INFO: Wrong image for pod: daemon-set-fzfqm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:00.517: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:01.509: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:01.509: INFO: Wrong image for pod: daemon-set-fzfqm. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:01.509: INFO: Pod daemon-set-fzfqm is not available Jul 19 23:39:01.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:02.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:02.508: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:02.510: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:03.509: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:03.509: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:03.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:04.747: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:04.747: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:04.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:05.509: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 19 23:39:05.509: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:05.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:06.507: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:06.508: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:06.510: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:07.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:07.508: INFO: Pod daemon-set-k472m is not available Jul 19 23:39:07.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:08.821: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:08.826: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:09.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:09.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:10.574: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 19 23:39:10.879: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:11.509: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:11.513: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:12.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:12.511: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:13.508: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:13.511: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:14.509: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 19 23:39:14.509: INFO: Pod daemon-set-44dzb is not available Jul 19 23:39:14.513: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:15.531: INFO: Wrong image for pod: daemon-set-44dzb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 19 23:39:15.531: INFO: Pod daemon-set-44dzb is not available Jul 19 23:39:15.536: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:16.541: INFO: Pod daemon-set-xwc6x is not available Jul 19 23:39:16.697: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jul 19 23:39:16.744: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:16.767: INFO: Number of nodes with available pods: 1 Jul 19 23:39:16.767: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:17.771: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:17.774: INFO: Number of nodes with available pods: 1 Jul 19 23:39:17.775: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:19.324: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:19.329: INFO: Number of nodes with available pods: 1 Jul 19 23:39:19.329: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:19.784: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:19.788: INFO: Number of nodes with available pods: 1 Jul 19 23:39:19.788: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:20.774: INFO: DaemonSet 
pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:20.777: INFO: Number of nodes with available pods: 1 Jul 19 23:39:20.777: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:21.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:21.776: INFO: Number of nodes with available pods: 1 Jul 19 23:39:21.776: INFO: Node iruya-worker2 is running more than one daemon pod Jul 19 23:39:22.772: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 19 23:39:22.776: INFO: Number of nodes with available pods: 2 Jul 19 23:39:22.776: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1980, will wait for the garbage collector to delete the pods Jul 19 23:39:22.863: INFO: Deleting DaemonSet.extensions daemon-set took: 14.972708ms Jul 19 23:39:23.163: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.252755ms Jul 19 23:39:27.067: INFO: Number of nodes with available pods: 0 Jul 19 23:39:27.067: INFO: Number of running nodes: 0, number of available pods: 0 Jul 19 23:39:27.070: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1980/daemonsets","resourceVersion":"34377"},"items":null} Jul 19 23:39:27.073: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1980/pods","resourceVersion":"34377"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:39:27.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1980" for this suite. Jul 19 23:39:33.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:39:33.200: INFO: namespace daemonsets-1980 deletion completed in 6.114647774s • [SLOW TEST:40.177 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:39:33.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2214a865-441b-44ec-ae62-b8e178ab3f87 STEP: Creating a pod to test 
consume configMaps Jul 19 23:39:33.263: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79" in namespace "projected-2516" to be "success or failure" Jul 19 23:39:33.267: INFO: Pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616088ms Jul 19 23:39:35.442: INFO: Pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178949403s Jul 19 23:39:37.447: INFO: Pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183438292s Jul 19 23:39:39.451: INFO: Pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187187497s STEP: Saw pod success Jul 19 23:39:39.451: INFO: Pod "pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79" satisfied condition "success or failure" Jul 19 23:39:39.454: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79 container projected-configmap-volume-test: STEP: delete the pod Jul 19 23:39:39.593: INFO: Waiting for pod pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79 to disappear Jul 19 23:39:39.784: INFO: Pod pod-projected-configmaps-06044e2e-3b04-4834-81fa-60d692b2da79 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:39:39.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2516" for this suite. 
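The projected ConfigMap test above consumes a ConfigMap through a `projected` volume with `defaultMode` set, so every projected file gets that permission bits unless overridden per item. A sketch (pod name, image, mode, and key are assumptions):

```yaml
# Sketch only: projected configMap volume with a defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0400       # applied to all projected files by default
      sources:
      - configMap:
          name: projected-configmap-test-volume
  restartPolicy: Never
```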
Jul 19 23:39:45.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:39:45.873: INFO: namespace projected-2516 deletion completed in 6.085139145s • [SLOW TEST:12.673 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:39:45.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 19 23:39:46.118: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 19 23:39:51.244: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:39:52.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8089" for this suite. 
Jul 19 23:40:00.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:40:00.847: INFO: namespace replication-controller-8089 deletion completed in 8.568874145s
• [SLOW TEST:14.974 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:40:00.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-aefb0a90-a17d-4f08-b946-2128ea92e3c0
STEP: Creating secret with name secret-projected-all-test-volume-26a9695e-9e4a-4221-96fa-75b1be4b9f99
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 19 23:40:01.258: INFO: Waiting up to 5m0s for pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c" in namespace "projected-6202" to be "success or failure"
Jul 19 23:40:01.280: INFO: Pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.606017ms
Jul 19 23:40:03.316: INFO: Pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057589279s
Jul 19 23:40:05.346: INFO: Pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08757863s
Jul 19 23:40:07.350: INFO: Pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091481696s
STEP: Saw pod success
Jul 19 23:40:07.350: INFO: Pod "projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c" satisfied condition "success or failure"
Jul 19 23:40:07.353: INFO: Trying to get logs from node iruya-worker pod projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c container projected-all-volume-test:
STEP: delete the pod
Jul 19 23:40:07.405: INFO: Waiting for pod projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c to disappear
Jul 19 23:40:07.414: INFO: Pod projected-volume-c87d8e26-0ed3-4ef8-a5c6-db586416fa3c no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:40:07.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6202" for this suite.
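For reference, the "Projected combined" spec above verifies that a single `projected` volume can merge secret, configMap, and downward API sources into one mount. The manifest below is an illustrative sketch of such a pod, not the test's actual fixture; the resource names, image, keys, and paths are hypothetical:

```yaml
# Illustrative only: approximates the pod the projected-combined e2e test creates.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                      # assumed image
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:                          # one volume, three sources
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: example-configmap       # hypothetical name (the test uses a UUID-suffixed one)
          items:
          - key: data
            path: cm-data
      - secret:
          name: example-secret          # hypothetical name
          items:
          - key: data
            path: secret-data
```

The container reads all three files from the same mount point, which is exactly the property the conformance test asserts.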
Jul 19 23:40:13.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:40:13.497: INFO: namespace projected-6202 deletion completed in 6.079269925s
• [SLOW TEST:12.649 seconds]
[sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:40:13.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5275b450-321e-4418-b297-0ce8ae99e568
STEP: Creating a pod to test consume secrets
Jul 19 23:40:13.620: INFO: Waiting up to 5m0s for pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8" in namespace "secrets-5305" to be "success or failure"
Jul 19 23:40:13.637: INFO: Pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.277031ms
Jul 19 23:40:15.898: INFO: Pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277546847s
Jul 19 23:40:17.902: INFO: Pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.282036535s
Jul 19 23:40:19.907: INFO: Pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286471868s
STEP: Saw pod success
Jul 19 23:40:19.907: INFO: Pod "pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8" satisfied condition "success or failure"
Jul 19 23:40:19.910: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8 container secret-volume-test:
STEP: delete the pod
Jul 19 23:40:19.983: INFO: Waiting for pod pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8 to disappear
Jul 19 23:40:19.990: INFO: Pod pod-secrets-cedc7d4b-a2ad-4ea6-b77f-e580cc1f8cc8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:40:19.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5305" for this suite.
Jul 19 23:40:26.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:40:27.199: INFO: namespace secrets-5305 deletion completed in 7.202023366s
• [SLOW TEST:13.701 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:40:27.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 19 23:40:32.599: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:40:32.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1807" for this suite.
Jul 19 23:40:38.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:40:38.734: INFO: namespace container-runtime-1807 deletion completed in 6.092933245s
• [SLOW TEST:11.535 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
    should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:40:38.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a047777d-fd70-4cf1-b50c-b4ef2d978f86
STEP: Creating a pod to test consume configMaps
Jul 19 23:40:38.840: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa" in namespace "configmap-1109" to be "success or failure"
Jul 19 23:40:38.916: INFO: Pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa": Phase="Pending", Reason="", readiness=false. Elapsed: 75.91466ms
Jul 19 23:40:40.920: INFO: Pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080237837s
Jul 19 23:40:42.924: INFO: Pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08382402s
Jul 19 23:40:44.928: INFO: Pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087886554s
STEP: Saw pod success
Jul 19 23:40:44.928: INFO: Pod "pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa" satisfied condition "success or failure"
Jul 19 23:40:44.931: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa container configmap-volume-test:
STEP: delete the pod
Jul 19 23:40:44.953: INFO: Waiting for pod pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa to disappear
Jul 19 23:40:45.009: INFO: Pod pod-configmaps-8bc2b043-28c8-433e-a9c8-090a4bff10fa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:40:45.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1109" for this suite.
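The ConfigMap spec above consumes a ConfigMap through a keyed `items` mapping (remapping a key to a custom path) while the pod runs as a non-root UID. A minimal sketch of such a pod follows; it is illustrative only, and the image, key names, and UID are assumptions rather than the test's actual fixture:

```yaml
# Illustrative only: approximates the configmap-volume "mappings as non-root" pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example         # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root, as the test title requires
  containers:
  - name: configmap-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map  # the test generates a UUID-suffixed name
      items:                           # the "mapping": key -> custom relative path
      - key: data                      # hypothetical key
        path: path/to/data
```

Only the listed keys appear in the mount, at the remapped paths, and the files must be readable by the non-root user.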
Jul 19 23:40:51.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:40:51.104: INFO: namespace configmap-1109 deletion completed in 6.090269894s
• [SLOW TEST:12.369 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:40:51.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4292/secret-test-90c514b9-6cbe-4495-809f-7da7e0ed41eb
STEP: Creating a pod to test consume secrets
Jul 19 23:40:51.262: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376" in namespace "secrets-4292" to be "success or failure"
Jul 19 23:40:51.265: INFO: Pod "pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376": Phase="Pending", Reason="", readiness=false. Elapsed: 3.452558ms
Jul 19 23:40:53.736: INFO: Pod "pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474342042s
Jul 19 23:40:55.740: INFO: Pod "pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.478410605s
STEP: Saw pod success
Jul 19 23:40:55.740: INFO: Pod "pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376" satisfied condition "success or failure"
Jul 19 23:40:55.742: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376 container env-test:
STEP: delete the pod
Jul 19 23:40:55.775: INFO: Waiting for pod pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376 to disappear
Jul 19 23:40:55.781: INFO: Pod pod-configmaps-8a918b42-ef94-44f8-a469-4a59f10a5376 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:40:55.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4292" for this suite.
Jul 19 23:41:01.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:41:02.059: INFO: namespace secrets-4292 deletion completed in 6.274460658s
• [SLOW TEST:10.954 seconds]
[sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:41:02.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 19 23:41:02.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af" in namespace "downward-api-8339" to be "success or failure"
Jul 19 23:41:02.150: INFO: Pod "downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.216693ms
Jul 19 23:41:04.287: INFO: Pod "downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140739416s
Jul 19 23:41:06.291: INFO: Pod "downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144406848s
STEP: Saw pod success
Jul 19 23:41:06.291: INFO: Pod "downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af" satisfied condition "success or failure"
Jul 19 23:41:06.294: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af container client-container:
STEP: delete the pod
Jul 19 23:41:06.523: INFO: Waiting for pod downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af to disappear
Jul 19 23:41:06.563: INFO: Pod downwardapi-volume-5f59ccbc-8ef8-4d1d-a685-8e308f3da3af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:41:06.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8339" for this suite.
Jul 19 23:41:14.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:41:14.956: INFO: namespace downward-api-8339 deletion completed in 8.388211406s
• [SLOW TEST:12.897 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:41:14.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jul 19 23:41:15.447: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9591" to be "success or failure"
Jul 19 23:41:15.581: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 134.28207ms
Jul 19 23:41:17.593: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146272588s
Jul 19 23:41:19.598: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150836273s
Jul 19 23:41:21.602: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.154954251s
Jul 19 23:41:23.607: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159724582s
STEP: Saw pod success
Jul 19 23:41:23.607: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 19 23:41:23.610: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jul 19 23:41:23.678: INFO: Waiting for pod pod-host-path-test to disappear
Jul 19 23:41:23.685: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:41:23.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9591" for this suite.
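The HostPath spec above checks the file mode a `hostPath` volume receives inside the container. A rough sketch of such a pod is shown below; this is illustrative only, and the image, command, and host directory are hypothetical, not the e2e fixture's actual values:

```yaml
# Illustrative only: approximates the HostPath mode-check pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox                      # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]   # hypothetical mode check of the mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-e2e           # hypothetical host directory
```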
Jul 19 23:41:29.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:41:29.771: INFO: namespace hostpath-9591 deletion completed in 6.082406791s
• [SLOW TEST:14.814 seconds]
[sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:41:29.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7698
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 19 23:41:29.828: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 19 23:41:58.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.2:8080/dial?request=hostName&protocol=udp&host=10.244.1.254&port=8081&tries=1'] Namespace:pod-network-test-7698 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:41:58.057: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:41:58.089008 6 log.go:172] (0xc002b816b0) (0xc0028f2460) Create stream
I0719 23:41:58.089038 6 log.go:172] (0xc002b816b0) (0xc0028f2460) Stream added, broadcasting: 1
I0719 23:41:58.091107 6 log.go:172] (0xc002b816b0) Reply frame received for 1
I0719 23:41:58.091162 6 log.go:172] (0xc002b816b0) (0xc0028f2500) Create stream
I0719 23:41:58.091185 6 log.go:172] (0xc002b816b0) (0xc0028f2500) Stream added, broadcasting: 3
I0719 23:41:58.091955 6 log.go:172] (0xc002b816b0) Reply frame received for 3
I0719 23:41:58.091985 6 log.go:172] (0xc002b816b0) (0xc002a09c20) Create stream
I0719 23:41:58.091994 6 log.go:172] (0xc002b816b0) (0xc002a09c20) Stream added, broadcasting: 5
I0719 23:41:58.092789 6 log.go:172] (0xc002b816b0) Reply frame received for 5
I0719 23:41:58.167578 6 log.go:172] (0xc002b816b0) Data frame received for 3
I0719 23:41:58.167621 6 log.go:172] (0xc0028f2500) (3) Data frame handling
I0719 23:41:58.167647 6 log.go:172] (0xc0028f2500) (3) Data frame sent
I0719 23:41:58.167891 6 log.go:172] (0xc002b816b0) Data frame received for 5
I0719 23:41:58.167906 6 log.go:172] (0xc002a09c20) (5) Data frame handling
I0719 23:41:58.167930 6 log.go:172] (0xc002b816b0) Data frame received for 3
I0719 23:41:58.167989 6 log.go:172] (0xc0028f2500) (3) Data frame handling
I0719 23:41:58.172975 6 log.go:172] (0xc002b816b0) Data frame received for 1
I0719 23:41:58.173009 6 log.go:172] (0xc0028f2460) (1) Data frame handling
I0719 23:41:58.173041 6 log.go:172] (0xc0028f2460) (1) Data frame sent
I0719 23:41:58.173063 6 log.go:172] (0xc002b816b0) (0xc0028f2460) Stream removed, broadcasting: 1
I0719 23:41:58.173087 6 log.go:172] (0xc002b816b0) Go away received
I0719 23:41:58.173213 6 log.go:172] (0xc002b816b0) (0xc0028f2460) Stream removed, broadcasting: 1
I0719 23:41:58.173233 6 log.go:172] (0xc002b816b0) (0xc0028f2500) Stream removed, broadcasting: 3
I0719 23:41:58.173241 6 log.go:172] (0xc002b816b0) (0xc002a09c20) Stream removed, broadcasting: 5
Jul 19 23:41:58.173: INFO: Waiting for endpoints: map[]
Jul 19 23:41:58.177: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.2:8080/dial?request=hostName&protocol=udp&host=10.244.2.234&port=8081&tries=1'] Namespace:pod-network-test-7698 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:41:58.177: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:41:58.206377 6 log.go:172] (0xc0004500b0) (0xc001fd6000) Create stream
I0719 23:41:58.206403 6 log.go:172] (0xc0004500b0) (0xc001fd6000) Stream added, broadcasting: 1
I0719 23:41:58.207963 6 log.go:172] (0xc0004500b0) Reply frame received for 1
I0719 23:41:58.208022 6 log.go:172] (0xc0004500b0) (0xc001fd6140) Create stream
I0719 23:41:58.208037 6 log.go:172] (0xc0004500b0) (0xc001fd6140) Stream added, broadcasting: 3
I0719 23:41:58.209141 6 log.go:172] (0xc0004500b0) Reply frame received for 3
I0719 23:41:58.209176 6 log.go:172] (0xc0004500b0) (0xc0010ba000) Create stream
I0719 23:41:58.209191 6 log.go:172] (0xc0004500b0) (0xc0010ba000) Stream added, broadcasting: 5
I0719 23:41:58.209994 6 log.go:172] (0xc0004500b0) Reply frame received for 5
I0719 23:41:58.270585 6 log.go:172] (0xc0004500b0) Data frame received for 5
I0719 23:41:58.270620 6 log.go:172] (0xc0010ba000) (5) Data frame handling
I0719 23:41:58.270641 6 log.go:172] (0xc0004500b0) Data frame received for 3
I0719 23:41:58.270651 6 log.go:172] (0xc001fd6140) (3) Data frame handling
I0719 23:41:58.270663 6 log.go:172] (0xc001fd6140) (3) Data frame sent
I0719 23:41:58.270674 6 log.go:172] (0xc0004500b0) Data frame received for 3
I0719 23:41:58.270687 6 log.go:172] (0xc001fd6140) (3) Data frame handling
I0719 23:41:58.271709 6 log.go:172] (0xc0004500b0) Data frame received for 1
I0719 23:41:58.271723 6 log.go:172] (0xc001fd6000) (1) Data frame handling
I0719 23:41:58.271741 6 log.go:172] (0xc001fd6000) (1) Data frame sent
I0719 23:41:58.271756 6 log.go:172] (0xc0004500b0) (0xc001fd6000) Stream removed, broadcasting: 1
I0719 23:41:58.271775 6 log.go:172] (0xc0004500b0) Go away received
I0719 23:41:58.271875 6 log.go:172] (0xc0004500b0) (0xc001fd6000) Stream removed, broadcasting: 1
I0719 23:41:58.271892 6 log.go:172] (0xc0004500b0) (0xc001fd6140) Stream removed, broadcasting: 3
I0719 23:41:58.271900 6 log.go:172] (0xc0004500b0) (0xc0010ba000) Stream removed, broadcasting: 5
Jul 19 23:41:58.271: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:41:58.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7698" for this suite.
Jul 19 23:42:24.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:42:24.543: INFO: namespace pod-network-test-7698 deletion completed in 26.173487055s
• [SLOW TEST:54.772 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:42:24.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jul 19 23:42:24.746: INFO: Waiting up to 5m0s for pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8" in namespace "var-expansion-2065" to be "success or failure"
Jul 19 23:42:24.750: INFO: Pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484436ms
Jul 19 23:42:26.835: INFO: Pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089088084s
Jul 19 23:42:28.839: INFO: Pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8": Phase="Running", Reason="", readiness=true. Elapsed: 4.093019722s
Jul 19 23:42:30.843: INFO: Pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097044293s
STEP: Saw pod success
Jul 19 23:42:30.843: INFO: Pod "var-expansion-880883e2-93a9-4729-9c1d-695712f784d8" satisfied condition "success or failure"
Jul 19 23:42:30.846: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-880883e2-93a9-4729-9c1d-695712f784d8 container dapi-container:
STEP: delete the pod
Jul 19 23:42:30.877: INFO: Waiting for pod var-expansion-880883e2-93a9-4729-9c1d-695712f784d8 to disappear
Jul 19 23:42:30.889: INFO: Pod var-expansion-880883e2-93a9-4729-9c1d-695712f784d8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:42:30.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2065" for this suite.
Jul 19 23:42:36.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:42:36.984: INFO: namespace var-expansion-2065 deletion completed in 6.091610755s
• [SLOW TEST:12.441 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:42:36.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jul 19 23:42:37.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9008'
Jul 19 23:42:39.813: INFO: stderr: ""
Jul 19 23:42:39.813: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 19 23:42:39.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
Jul 19 23:42:39.927: INFO: stderr: ""
Jul 19 23:42:39.927: INFO: stdout: "update-demo-nautilus-mwgcq update-demo-nautilus-mzjp2 "
Jul 19 23:42:39.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwgcq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
Jul 19 23:42:40.038: INFO: stderr: ""
Jul 19 23:42:40.038: INFO: stdout: ""
Jul 19 23:42:40.038: INFO: update-demo-nautilus-mwgcq is created but not running
Jul 19 23:42:45.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9008'
Jul 19 23:42:45.142: INFO: stderr: ""
Jul 19 23:42:45.142: INFO: stdout: "update-demo-nautilus-mwgcq update-demo-nautilus-mzjp2 "
Jul 19 23:42:45.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwgcq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008'
Jul 19 23:42:45.235: INFO: stderr: ""
Jul 19 23:42:45.236: INFO: stdout: "true"
Jul 19 23:42:45.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mwgcq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008'
Jul 19 23:42:45.331: INFO: stderr: ""
Jul 19 23:42:45.331: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 19 23:42:45.331: INFO: validating pod update-demo-nautilus-mwgcq
Jul 19 23:42:45.335: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 19 23:42:45.335: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 19 23:42:45.335: INFO: update-demo-nautilus-mwgcq is verified up and running
Jul 19 23:42:45.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzjp2 -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9008' Jul 19 23:42:45.427: INFO: stderr: "" Jul 19 23:42:45.427: INFO: stdout: "true" Jul 19 23:42:45.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzjp2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9008' Jul 19 23:42:45.515: INFO: stderr: "" Jul 19 23:42:45.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 19 23:42:45.515: INFO: validating pod update-demo-nautilus-mzjp2 Jul 19 23:42:45.519: INFO: got data: { "image": "nautilus.jpg" } Jul 19 23:42:45.519: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 19 23:42:45.519: INFO: update-demo-nautilus-mzjp2 is verified up and running STEP: using delete to clean up resources Jul 19 23:42:45.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9008' Jul 19 23:42:45.665: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 19 23:42:45.665: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 19 23:42:45.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9008' Jul 19 23:42:45.773: INFO: stderr: "No resources found.\n" Jul 19 23:42:45.773: INFO: stdout: "" Jul 19 23:42:45.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 19 23:42:45.867: INFO: stderr: "" Jul 19 23:42:45.867: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:42:45.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9008" for this suite. 
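The running-state checks in this Update Demo test use kubectl's legacy go-template output with a custom `exists` helper that is not part of Go's standard template builtins. A minimal local sketch of how such a template evaluates against pod-like data — the `exists` implementation below is an assumption written for illustration, not kubectl's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// exists mimics (as an assumption) kubectl's template helper: it reports
// whether a chain of nested map keys is present in the given data.
func exists(data map[string]interface{}, keys ...string) bool {
	cur := data
	for i, k := range keys {
		v, ok := cur[k]
		if !ok {
			return false
		}
		if i == len(keys)-1 {
			return true
		}
		cur, ok = v.(map[string]interface{})
		if !ok {
			return false
		}
	}
	return true // no keys requested: the data itself exists
}

func main() {
	// A stripped-down stand-in for the pod object kubectl templates over.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
	// The same template shape the test passes via --template.
	tmpl := template.Must(template.New("check").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(`{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, pod); err != nil {
		panic(err)
	}
	fmt.Println(buf.String()) // prints "true" once the container reports a running state
}
```

This mirrors the log above: the first poll prints an empty stdout (pod created but not yet running), and a later poll prints `true`.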
Jul 19 23:43:07.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:43:08.001: INFO: namespace kubectl-9008 deletion completed in 22.119129405s • [SLOW TEST:31.016 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:43:08.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 19 23:43:08.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf" in namespace "downward-api-6072" to be "success or failure" Jul 19 23:43:08.333: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.484387ms Jul 19 23:43:10.984: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662465665s Jul 19 23:43:13.552: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.230829147s Jul 19 23:43:15.888: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.56678457s Jul 19 23:43:18.184: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.862252284s STEP: Saw pod success Jul 19 23:43:18.184: INFO: Pod "downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf" satisfied condition "success or failure" Jul 19 23:43:18.368: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf container client-container: STEP: delete the pod Jul 19 23:43:19.289: INFO: Waiting for pod downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf to disappear Jul 19 23:43:19.292: INFO: Pod downwardapi-volume-314832a0-5c86-482a-a289-b4db656443cf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:43:19.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6072" for this suite. 
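The Downward API volume test above mounts a file whose content is the container's CPU limit, exposed through a `resourceFieldRef` with a divisor. The value written is the limit divided by the divisor, rounded up to an integer — a sketch of that arithmetic, with the round-up rule stated as an assumption about this behavior rather than quoted from the implementation:

```go
package main

import "fmt"

// cpuWithDivisor sketches how a downward API resourceFieldRef might expose a
// CPU limit: the quantity is divided by the divisor and rounded up to an
// integer (assumed rounding rule; both arguments are in millicores).
func cpuWithDivisor(limitMilli, divisorMilli int64) int64 {
	return (limitMilli + divisorMilli - 1) / divisorMilli // ceiling division
}

func main() {
	fmt.Println(cpuWithDivisor(500, 1))    // limit 500m, divisor 1m -> 500
	fmt.Println(cpuWithDivisor(500, 1000)) // limit 500m, divisor 1  -> 1 (rounded up)
}
```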
Jul 19 23:43:27.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:43:27.454: INFO: namespace downward-api-6072 deletion completed in 8.159640138s • [SLOW TEST:19.453 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:43:27.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 19 23:43:42.140: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.140: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.179884 6 log.go:172] (0xc00157ca50) 
(0xc002c803c0) Create stream I0719 23:43:42.179920 6 log.go:172] (0xc00157ca50) (0xc002c803c0) Stream added, broadcasting: 1 I0719 23:43:42.182215 6 log.go:172] (0xc00157ca50) Reply frame received for 1 I0719 23:43:42.182262 6 log.go:172] (0xc00157ca50) (0xc001b3c820) Create stream I0719 23:43:42.182277 6 log.go:172] (0xc00157ca50) (0xc001b3c820) Stream added, broadcasting: 3 I0719 23:43:42.183961 6 log.go:172] (0xc00157ca50) Reply frame received for 3 I0719 23:43:42.184023 6 log.go:172] (0xc00157ca50) (0xc002c80460) Create stream I0719 23:43:42.184042 6 log.go:172] (0xc00157ca50) (0xc002c80460) Stream added, broadcasting: 5 I0719 23:43:42.185030 6 log.go:172] (0xc00157ca50) Reply frame received for 5 I0719 23:43:42.234325 6 log.go:172] (0xc00157ca50) Data frame received for 3 I0719 23:43:42.234356 6 log.go:172] (0xc001b3c820) (3) Data frame handling I0719 23:43:42.234376 6 log.go:172] (0xc001b3c820) (3) Data frame sent I0719 23:43:42.234384 6 log.go:172] (0xc00157ca50) Data frame received for 3 I0719 23:43:42.234391 6 log.go:172] (0xc001b3c820) (3) Data frame handling I0719 23:43:42.234412 6 log.go:172] (0xc00157ca50) Data frame received for 5 I0719 23:43:42.234420 6 log.go:172] (0xc002c80460) (5) Data frame handling I0719 23:43:42.236365 6 log.go:172] (0xc00157ca50) Data frame received for 1 I0719 23:43:42.236391 6 log.go:172] (0xc002c803c0) (1) Data frame handling I0719 23:43:42.236414 6 log.go:172] (0xc002c803c0) (1) Data frame sent I0719 23:43:42.236575 6 log.go:172] (0xc00157ca50) (0xc002c803c0) Stream removed, broadcasting: 1 I0719 23:43:42.236604 6 log.go:172] (0xc00157ca50) Go away received I0719 23:43:42.236816 6 log.go:172] (0xc00157ca50) (0xc002c803c0) Stream removed, broadcasting: 1 I0719 23:43:42.236860 6 log.go:172] (0xc00157ca50) (0xc001b3c820) Stream removed, broadcasting: 3 I0719 23:43:42.236872 6 log.go:172] (0xc00157ca50) (0xc002c80460) Stream removed, broadcasting: 5 Jul 19 23:43:42.236: INFO: Exec stderr: "" Jul 19 23:43:42.236: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.236: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.273725 6 log.go:172] (0xc00157da20) (0xc002c80780) Create stream I0719 23:43:42.273753 6 log.go:172] (0xc00157da20) (0xc002c80780) Stream added, broadcasting: 1 I0719 23:43:42.275842 6 log.go:172] (0xc00157da20) Reply frame received for 1 I0719 23:43:42.275883 6 log.go:172] (0xc00157da20) (0xc001b3c8c0) Create stream I0719 23:43:42.275899 6 log.go:172] (0xc00157da20) (0xc001b3c8c0) Stream added, broadcasting: 3 I0719 23:43:42.277162 6 log.go:172] (0xc00157da20) Reply frame received for 3 I0719 23:43:42.277216 6 log.go:172] (0xc00157da20) (0xc001b3caa0) Create stream I0719 23:43:42.277229 6 log.go:172] (0xc00157da20) (0xc001b3caa0) Stream added, broadcasting: 5 I0719 23:43:42.277912 6 log.go:172] (0xc00157da20) Reply frame received for 5 I0719 23:43:42.335820 6 log.go:172] (0xc00157da20) Data frame received for 5 I0719 23:43:42.335865 6 log.go:172] (0xc001b3caa0) (5) Data frame handling I0719 23:43:42.335906 6 log.go:172] (0xc00157da20) Data frame received for 3 I0719 23:43:42.335918 6 log.go:172] (0xc001b3c8c0) (3) Data frame handling I0719 23:43:42.335927 6 log.go:172] (0xc001b3c8c0) (3) Data frame sent I0719 23:43:42.335937 6 log.go:172] (0xc00157da20) Data frame received for 3 I0719 23:43:42.335942 6 log.go:172] (0xc001b3c8c0) (3) Data frame handling I0719 23:43:42.337628 6 log.go:172] (0xc00157da20) Data frame received for 1 I0719 23:43:42.337651 6 log.go:172] (0xc002c80780) (1) Data frame handling I0719 23:43:42.337658 6 log.go:172] (0xc002c80780) (1) Data frame sent I0719 23:43:42.337673 6 log.go:172] (0xc00157da20) (0xc002c80780) Stream removed, broadcasting: 1 I0719 23:43:42.337684 6 log.go:172] (0xc00157da20) Go away received I0719 23:43:42.337861 6 log.go:172] (0xc00157da20) 
(0xc002c80780) Stream removed, broadcasting: 1 I0719 23:43:42.337872 6 log.go:172] (0xc00157da20) (0xc001b3c8c0) Stream removed, broadcasting: 3 I0719 23:43:42.337878 6 log.go:172] (0xc00157da20) (0xc001b3caa0) Stream removed, broadcasting: 5 Jul 19 23:43:42.337: INFO: Exec stderr: "" Jul 19 23:43:42.337: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.337: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.369214 6 log.go:172] (0xc0025cb4a0) (0xc001b3d0e0) Create stream I0719 23:43:42.369249 6 log.go:172] (0xc0025cb4a0) (0xc001b3d0e0) Stream added, broadcasting: 1 I0719 23:43:42.371966 6 log.go:172] (0xc0025cb4a0) Reply frame received for 1 I0719 23:43:42.371997 6 log.go:172] (0xc0025cb4a0) (0xc002c80820) Create stream I0719 23:43:42.372008 6 log.go:172] (0xc0025cb4a0) (0xc002c80820) Stream added, broadcasting: 3 I0719 23:43:42.372974 6 log.go:172] (0xc0025cb4a0) Reply frame received for 3 I0719 23:43:42.373038 6 log.go:172] (0xc0025cb4a0) (0xc0028f2000) Create stream I0719 23:43:42.373056 6 log.go:172] (0xc0025cb4a0) (0xc0028f2000) Stream added, broadcasting: 5 I0719 23:43:42.374086 6 log.go:172] (0xc0025cb4a0) Reply frame received for 5 I0719 23:43:42.441992 6 log.go:172] (0xc0025cb4a0) Data frame received for 5 I0719 23:43:42.442046 6 log.go:172] (0xc0028f2000) (5) Data frame handling I0719 23:43:42.442088 6 log.go:172] (0xc0025cb4a0) Data frame received for 3 I0719 23:43:42.442109 6 log.go:172] (0xc002c80820) (3) Data frame handling I0719 23:43:42.442135 6 log.go:172] (0xc002c80820) (3) Data frame sent I0719 23:43:42.442155 6 log.go:172] (0xc0025cb4a0) Data frame received for 3 I0719 23:43:42.442173 6 log.go:172] (0xc002c80820) (3) Data frame handling I0719 23:43:42.443445 6 log.go:172] (0xc0025cb4a0) Data frame received for 1 I0719 23:43:42.443467 6 log.go:172] (0xc001b3d0e0) (1) Data frame 
handling I0719 23:43:42.443480 6 log.go:172] (0xc001b3d0e0) (1) Data frame sent I0719 23:43:42.443488 6 log.go:172] (0xc0025cb4a0) (0xc001b3d0e0) Stream removed, broadcasting: 1 I0719 23:43:42.443546 6 log.go:172] (0xc0025cb4a0) Go away received I0719 23:43:42.443592 6 log.go:172] (0xc0025cb4a0) (0xc001b3d0e0) Stream removed, broadcasting: 1 I0719 23:43:42.443613 6 log.go:172] (0xc0025cb4a0) (0xc002c80820) Stream removed, broadcasting: 3 I0719 23:43:42.443624 6 log.go:172] (0xc0025cb4a0) (0xc0028f2000) Stream removed, broadcasting: 5 Jul 19 23:43:42.443: INFO: Exec stderr: "" Jul 19 23:43:42.443: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.443: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.478076 6 log.go:172] (0xc0022fd760) (0xc0028f2320) Create stream I0719 23:43:42.478102 6 log.go:172] (0xc0022fd760) (0xc0028f2320) Stream added, broadcasting: 1 I0719 23:43:42.481568 6 log.go:172] (0xc0022fd760) Reply frame received for 1 I0719 23:43:42.481675 6 log.go:172] (0xc0022fd760) (0xc00177d7c0) Create stream I0719 23:43:42.481719 6 log.go:172] (0xc0022fd760) (0xc00177d7c0) Stream added, broadcasting: 3 I0719 23:43:42.483015 6 log.go:172] (0xc0022fd760) Reply frame received for 3 I0719 23:43:42.483056 6 log.go:172] (0xc0022fd760) (0xc000aacb40) Create stream I0719 23:43:42.483070 6 log.go:172] (0xc0022fd760) (0xc000aacb40) Stream added, broadcasting: 5 I0719 23:43:42.483924 6 log.go:172] (0xc0022fd760) Reply frame received for 5 I0719 23:43:42.557567 6 log.go:172] (0xc0022fd760) Data frame received for 5 I0719 23:43:42.557609 6 log.go:172] (0xc000aacb40) (5) Data frame handling I0719 23:43:42.557635 6 log.go:172] (0xc0022fd760) Data frame received for 3 I0719 23:43:42.557649 6 log.go:172] (0xc00177d7c0) (3) Data frame handling I0719 23:43:42.557664 6 log.go:172] (0xc00177d7c0) (3) Data 
frame sent I0719 23:43:42.557677 6 log.go:172] (0xc0022fd760) Data frame received for 3 I0719 23:43:42.557707 6 log.go:172] (0xc00177d7c0) (3) Data frame handling I0719 23:43:42.559960 6 log.go:172] (0xc0022fd760) Data frame received for 1 I0719 23:43:42.559991 6 log.go:172] (0xc0028f2320) (1) Data frame handling I0719 23:43:42.560013 6 log.go:172] (0xc0028f2320) (1) Data frame sent I0719 23:43:42.560030 6 log.go:172] (0xc0022fd760) (0xc0028f2320) Stream removed, broadcasting: 1 I0719 23:43:42.560048 6 log.go:172] (0xc0022fd760) Go away received I0719 23:43:42.560190 6 log.go:172] (0xc0022fd760) (0xc0028f2320) Stream removed, broadcasting: 1 I0719 23:43:42.560224 6 log.go:172] (0xc0022fd760) (0xc00177d7c0) Stream removed, broadcasting: 3 I0719 23:43:42.560243 6 log.go:172] (0xc0022fd760) (0xc000aacb40) Stream removed, broadcasting: 5 Jul 19 23:43:42.560: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 19 23:43:42.560: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.560: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.593909 6 log.go:172] (0xc00151b3f0) (0xc00177dc20) Create stream I0719 23:43:42.593937 6 log.go:172] (0xc00151b3f0) (0xc00177dc20) Stream added, broadcasting: 1 I0719 23:43:42.595817 6 log.go:172] (0xc00151b3f0) Reply frame received for 1 I0719 23:43:42.595864 6 log.go:172] (0xc00151b3f0) (0xc0028f23c0) Create stream I0719 23:43:42.595876 6 log.go:172] (0xc00151b3f0) (0xc0028f23c0) Stream added, broadcasting: 3 I0719 23:43:42.596865 6 log.go:172] (0xc00151b3f0) Reply frame received for 3 I0719 23:43:42.596918 6 log.go:172] (0xc00151b3f0) (0xc0028f2460) Create stream I0719 23:43:42.596995 6 log.go:172] (0xc00151b3f0) (0xc0028f2460) Stream added, broadcasting: 5 I0719 23:43:42.597891 6 log.go:172] 
(0xc00151b3f0) Reply frame received for 5 I0719 23:43:42.657872 6 log.go:172] (0xc00151b3f0) Data frame received for 3 I0719 23:43:42.657909 6 log.go:172] (0xc0028f23c0) (3) Data frame handling I0719 23:43:42.657923 6 log.go:172] (0xc0028f23c0) (3) Data frame sent I0719 23:43:42.657933 6 log.go:172] (0xc00151b3f0) Data frame received for 3 I0719 23:43:42.657945 6 log.go:172] (0xc0028f23c0) (3) Data frame handling I0719 23:43:42.658012 6 log.go:172] (0xc00151b3f0) Data frame received for 5 I0719 23:43:42.658071 6 log.go:172] (0xc0028f2460) (5) Data frame handling I0719 23:43:42.659614 6 log.go:172] (0xc00151b3f0) Data frame received for 1 I0719 23:43:42.659638 6 log.go:172] (0xc00177dc20) (1) Data frame handling I0719 23:43:42.659651 6 log.go:172] (0xc00177dc20) (1) Data frame sent I0719 23:43:42.659669 6 log.go:172] (0xc00151b3f0) (0xc00177dc20) Stream removed, broadcasting: 1 I0719 23:43:42.659706 6 log.go:172] (0xc00151b3f0) Go away received I0719 23:43:42.659808 6 log.go:172] (0xc00151b3f0) (0xc00177dc20) Stream removed, broadcasting: 1 I0719 23:43:42.659843 6 log.go:172] (0xc00151b3f0) (0xc0028f23c0) Stream removed, broadcasting: 3 I0719 23:43:42.659858 6 log.go:172] (0xc00151b3f0) (0xc0028f2460) Stream removed, broadcasting: 5 Jul 19 23:43:42.659: INFO: Exec stderr: "" Jul 19 23:43:42.659: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.659: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.719498 6 log.go:172] (0xc0031be000) (0xc002240000) Create stream I0719 23:43:42.719536 6 log.go:172] (0xc0031be000) (0xc002240000) Stream added, broadcasting: 1 I0719 23:43:42.722125 6 log.go:172] (0xc0031be000) Reply frame received for 1 I0719 23:43:42.722177 6 log.go:172] (0xc0031be000) (0xc0028f2500) Create stream I0719 23:43:42.722189 6 log.go:172] (0xc0031be000) (0xc0028f2500) Stream added, 
broadcasting: 3 I0719 23:43:42.723224 6 log.go:172] (0xc0031be000) Reply frame received for 3 I0719 23:43:42.723289 6 log.go:172] (0xc0031be000) (0xc001b3d180) Create stream I0719 23:43:42.723309 6 log.go:172] (0xc0031be000) (0xc001b3d180) Stream added, broadcasting: 5 I0719 23:43:42.724512 6 log.go:172] (0xc0031be000) Reply frame received for 5 I0719 23:43:42.775372 6 log.go:172] (0xc0031be000) Data frame received for 5 I0719 23:43:42.775405 6 log.go:172] (0xc001b3d180) (5) Data frame handling I0719 23:43:42.775422 6 log.go:172] (0xc0031be000) Data frame received for 3 I0719 23:43:42.775434 6 log.go:172] (0xc0028f2500) (3) Data frame handling I0719 23:43:42.775451 6 log.go:172] (0xc0028f2500) (3) Data frame sent I0719 23:43:42.775458 6 log.go:172] (0xc0031be000) Data frame received for 3 I0719 23:43:42.775461 6 log.go:172] (0xc0028f2500) (3) Data frame handling I0719 23:43:42.777210 6 log.go:172] (0xc0031be000) Data frame received for 1 I0719 23:43:42.777235 6 log.go:172] (0xc002240000) (1) Data frame handling I0719 23:43:42.777244 6 log.go:172] (0xc002240000) (1) Data frame sent I0719 23:43:42.777251 6 log.go:172] (0xc0031be000) (0xc002240000) Stream removed, broadcasting: 1 I0719 23:43:42.777263 6 log.go:172] (0xc0031be000) Go away received I0719 23:43:42.777407 6 log.go:172] (0xc0031be000) (0xc002240000) Stream removed, broadcasting: 1 I0719 23:43:42.777444 6 log.go:172] (0xc0031be000) (0xc0028f2500) Stream removed, broadcasting: 3 I0719 23:43:42.777466 6 log.go:172] (0xc0031be000) (0xc001b3d180) Stream removed, broadcasting: 5 Jul 19 23:43:42.777: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 19 23:43:42.777: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.777: INFO: >>> kubeConfig: /root/.kube/config 
I0719 23:43:42.818388 6 log.go:172] (0xc002923290) (0xc000aad400) Create stream I0719 23:43:42.818432 6 log.go:172] (0xc002923290) (0xc000aad400) Stream added, broadcasting: 1 I0719 23:43:42.820666 6 log.go:172] (0xc002923290) Reply frame received for 1 I0719 23:43:42.820832 6 log.go:172] (0xc002923290) (0xc001b3d400) Create stream I0719 23:43:42.820859 6 log.go:172] (0xc002923290) (0xc001b3d400) Stream added, broadcasting: 3 I0719 23:43:42.821828 6 log.go:172] (0xc002923290) Reply frame received for 3 I0719 23:43:42.821856 6 log.go:172] (0xc002923290) (0xc001b3d540) Create stream I0719 23:43:42.821875 6 log.go:172] (0xc002923290) (0xc001b3d540) Stream added, broadcasting: 5 I0719 23:43:42.822880 6 log.go:172] (0xc002923290) Reply frame received for 5 I0719 23:43:42.876042 6 log.go:172] (0xc002923290) Data frame received for 5 I0719 23:43:42.876079 6 log.go:172] (0xc001b3d540) (5) Data frame handling I0719 23:43:42.876102 6 log.go:172] (0xc002923290) Data frame received for 3 I0719 23:43:42.876114 6 log.go:172] (0xc001b3d400) (3) Data frame handling I0719 23:43:42.876150 6 log.go:172] (0xc001b3d400) (3) Data frame sent I0719 23:43:42.876171 6 log.go:172] (0xc002923290) Data frame received for 3 I0719 23:43:42.876180 6 log.go:172] (0xc001b3d400) (3) Data frame handling I0719 23:43:42.877873 6 log.go:172] (0xc002923290) Data frame received for 1 I0719 23:43:42.877904 6 log.go:172] (0xc000aad400) (1) Data frame handling I0719 23:43:42.877923 6 log.go:172] (0xc000aad400) (1) Data frame sent I0719 23:43:42.877942 6 log.go:172] (0xc002923290) (0xc000aad400) Stream removed, broadcasting: 1 I0719 23:43:42.877962 6 log.go:172] (0xc002923290) Go away received I0719 23:43:42.878155 6 log.go:172] (0xc002923290) (0xc000aad400) Stream removed, broadcasting: 1 I0719 23:43:42.878193 6 log.go:172] (0xc002923290) (0xc001b3d400) Stream removed, broadcasting: 3 I0719 23:43:42.878219 6 log.go:172] (0xc002923290) (0xc001b3d540) Stream removed, broadcasting: 5 Jul 19 23:43:42.878: INFO: 
Exec stderr: "" Jul 19 23:43:42.878: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 19 23:43:42.878: INFO: >>> kubeConfig: /root/.kube/config I0719 23:43:42.909373 6 log.go:172] (0xc00285b1e0) (0xc002c80b40) Create stream I0719 23:43:42.909404 6 log.go:172] (0xc00285b1e0) (0xc002c80b40) Stream added, broadcasting: 1 I0719 23:43:42.911826 6 log.go:172] (0xc00285b1e0) Reply frame received for 1 I0719 23:43:42.911867 6 log.go:172] (0xc00285b1e0) (0xc001b3d5e0) Create stream I0719 23:43:42.911883 6 log.go:172] (0xc00285b1e0) (0xc001b3d5e0) Stream added, broadcasting: 3 I0719 23:43:42.912617 6 log.go:172] (0xc00285b1e0) Reply frame received for 3 I0719 23:43:42.912652 6 log.go:172] (0xc00285b1e0) (0xc002c80be0) Create stream I0719 23:43:42.912664 6 log.go:172] (0xc00285b1e0) (0xc002c80be0) Stream added, broadcasting: 5 I0719 23:43:42.913548 6 log.go:172] (0xc00285b1e0) Reply frame received for 5 I0719 23:43:42.974677 6 log.go:172] (0xc00285b1e0) Data frame received for 3 I0719 23:43:42.974729 6 log.go:172] (0xc001b3d5e0) (3) Data frame handling I0719 23:43:42.974753 6 log.go:172] (0xc001b3d5e0) (3) Data frame sent I0719 23:43:42.974773 6 log.go:172] (0xc00285b1e0) Data frame received for 3 I0719 23:43:42.974790 6 log.go:172] (0xc001b3d5e0) (3) Data frame handling I0719 23:43:42.974929 6 log.go:172] (0xc00285b1e0) Data frame received for 5 I0719 23:43:42.974996 6 log.go:172] (0xc002c80be0) (5) Data frame handling I0719 23:43:42.977259 6 log.go:172] (0xc00285b1e0) Data frame received for 1 I0719 23:43:42.977292 6 log.go:172] (0xc002c80b40) (1) Data frame handling I0719 23:43:42.977321 6 log.go:172] (0xc002c80b40) (1) Data frame sent I0719 23:43:42.977341 6 log.go:172] (0xc00285b1e0) (0xc002c80b40) Stream removed, broadcasting: 1 I0719 23:43:42.977426 6 log.go:172] (0xc00285b1e0) Go away received 
I0719 23:43:42.977520 6 log.go:172] (0xc00285b1e0) (0xc002c80b40) Stream removed, broadcasting: 1
I0719 23:43:42.977562 6 log.go:172] (0xc00285b1e0) (0xc001b3d5e0) Stream removed, broadcasting: 3
I0719 23:43:42.977581 6 log.go:172] (0xc00285b1e0) (0xc002c80be0) Stream removed, broadcasting: 5
Jul 19 23:43:42.977: INFO: Exec stderr: ""
Jul 19 23:43:42.977: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:43:42.977: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:43:43.009965 6 log.go:172] (0xc00285bb80) (0xc002c80e60) Create stream
I0719 23:43:43.009995 6 log.go:172] (0xc00285bb80) (0xc002c80e60) Stream added, broadcasting: 1
I0719 23:43:43.011871 6 log.go:172] (0xc00285bb80) Reply frame received for 1
I0719 23:43:43.011929 6 log.go:172] (0xc00285bb80) (0xc002c80fa0) Create stream
I0719 23:43:43.011941 6 log.go:172] (0xc00285bb80) (0xc002c80fa0) Stream added, broadcasting: 3
I0719 23:43:43.012982 6 log.go:172] (0xc00285bb80) Reply frame received for 3
I0719 23:43:43.013002 6 log.go:172] (0xc00285bb80) (0xc0022400a0) Create stream
I0719 23:43:43.013010 6 log.go:172] (0xc00285bb80) (0xc0022400a0) Stream added, broadcasting: 5
I0719 23:43:43.013756 6 log.go:172] (0xc00285bb80) Reply frame received for 5
I0719 23:43:43.087729 6 log.go:172] (0xc00285bb80) Data frame received for 5
I0719 23:43:43.087781 6 log.go:172] (0xc0022400a0) (5) Data frame handling
I0719 23:43:43.087811 6 log.go:172] (0xc00285bb80) Data frame received for 3
I0719 23:43:43.087829 6 log.go:172] (0xc002c80fa0) (3) Data frame handling
I0719 23:43:43.087849 6 log.go:172] (0xc002c80fa0) (3) Data frame sent
I0719 23:43:43.087865 6 log.go:172] (0xc00285bb80) Data frame received for 3
I0719 23:43:43.087881 6 log.go:172] (0xc002c80fa0) (3) Data frame handling
I0719 23:43:43.089245 6 log.go:172] (0xc00285bb80) Data frame received for 1
I0719 23:43:43.089268 6 log.go:172] (0xc002c80e60) (1) Data frame handling
I0719 23:43:43.089293 6 log.go:172] (0xc002c80e60) (1) Data frame sent
I0719 23:43:43.089306 6 log.go:172] (0xc00285bb80) (0xc002c80e60) Stream removed, broadcasting: 1
I0719 23:43:43.089319 6 log.go:172] (0xc00285bb80) Go away received
I0719 23:43:43.089455 6 log.go:172] (0xc00285bb80) (0xc002c80e60) Stream removed, broadcasting: 1
I0719 23:43:43.089491 6 log.go:172] (0xc00285bb80) (0xc002c80fa0) Stream removed, broadcasting: 3
I0719 23:43:43.089516 6 log.go:172] (0xc00285bb80) (0xc0022400a0) Stream removed, broadcasting: 5
Jul 19 23:43:43.089: INFO: Exec stderr: ""
Jul 19 23:43:43.089: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6512 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:43:43.089: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:43:43.124096 6 log.go:172] (0xc0000158c0) (0xc0011b6320) Create stream
I0719 23:43:43.124141 6 log.go:172] (0xc0000158c0) (0xc0011b6320) Stream added, broadcasting: 1
I0719 23:43:43.126422 6 log.go:172] (0xc0000158c0) Reply frame received for 1
I0719 23:43:43.126470 6 log.go:172] (0xc0000158c0) (0xc0016600a0) Create stream
I0719 23:43:43.126486 6 log.go:172] (0xc0000158c0) (0xc0016600a0) Stream added, broadcasting: 3
I0719 23:43:43.127202 6 log.go:172] (0xc0000158c0) Reply frame received for 3
I0719 23:43:43.127270 6 log.go:172] (0xc0000158c0) (0xc00177c0a0) Create stream
I0719 23:43:43.127283 6 log.go:172] (0xc0000158c0) (0xc00177c0a0) Stream added, broadcasting: 5
I0719 23:43:43.127842 6 log.go:172] (0xc0000158c0) Reply frame received for 5
I0719 23:43:43.197736 6 log.go:172] (0xc0000158c0) Data frame received for 3
I0719 23:43:43.197769 6 log.go:172] (0xc0016600a0) (3) Data frame handling
I0719 23:43:43.197787 6 log.go:172] (0xc0016600a0) (3) Data frame sent
I0719 23:43:43.197897 6 log.go:172] (0xc0000158c0) Data frame received for 5
I0719 23:43:43.197949 6 log.go:172] (0xc00177c0a0) (5) Data frame handling
I0719 23:43:43.197981 6 log.go:172] (0xc0000158c0) Data frame received for 3
I0719 23:43:43.198000 6 log.go:172] (0xc0016600a0) (3) Data frame handling
I0719 23:43:43.199975 6 log.go:172] (0xc0000158c0) Data frame received for 1
I0719 23:43:43.199992 6 log.go:172] (0xc0011b6320) (1) Data frame handling
I0719 23:43:43.200000 6 log.go:172] (0xc0011b6320) (1) Data frame sent
I0719 23:43:43.200103 6 log.go:172] (0xc0000158c0) (0xc0011b6320) Stream removed, broadcasting: 1
I0719 23:43:43.200184 6 log.go:172] (0xc0000158c0) Go away received
I0719 23:43:43.200350 6 log.go:172] (0xc0000158c0) (0xc0011b6320) Stream removed, broadcasting: 1
I0719 23:43:43.200384 6 log.go:172] (0xc0000158c0) (0xc0016600a0) Stream removed, broadcasting: 3
I0719 23:43:43.200407 6 log.go:172] (0xc0000158c0) (0xc00177c0a0) Stream removed, broadcasting: 5
Jul 19 23:43:43.200: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:43:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6512" for this suite.
Jul 19 23:44:27.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:44:27.365: INFO: namespace e2e-kubelet-etc-hosts-6512 deletion completed in 44.160317241s
• [SLOW TEST:59.910 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:44:27.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 19 23:44:27.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6038'
Jul 19 23:44:27.551: INFO: stderr: ""
Jul 19 23:44:27.551: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul 19 23:44:32.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6038 -o json'
Jul 19 23:44:33.146: INFO: stderr: ""
Jul 19 23:44:33.146: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-19T23:44:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-6038\",\n \"resourceVersion\": \"36004\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6038/pods/e2e-test-nginx-pod\",\n \"uid\": \"b0396976-6646-4bf2-923d-68e715936cd0\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jcg8t\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jcg8t\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jcg8t\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-19T23:44:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-19T23:44:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-19T23:44:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-19T23:44:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://905b1412e95cb2aae21fa9a432c748422c7e516a7dfd1675230c3845bef2bab0\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-19T23:44:30Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.8\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-19T23:44:27Z\"\n }\n}\n"
STEP: replace the image in the pod
Jul 19 23:44:33.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6038'
Jul 19 23:44:34.056: INFO: stderr: ""
Jul 19 23:44:34.056: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jul 19 23:44:34.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6038'
Jul 19 23:44:41.678: INFO: stderr: ""
Jul 19 23:44:41.678: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:44:41.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6038" for this suite.
Jul 19 23:44:49.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:44:49.927: INFO: namespace kubectl-6038 deletion completed in 8.228199614s
• [SLOW TEST:22.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:44:49.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c0a8284f-1091-47dd-9bc0-568adc19708c
STEP: Creating a pod to test consume secrets
Jul 19 23:44:50.524: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31" in namespace "projected-1155" to be "success or failure"
Jul 19 23:44:50.545: INFO: Pod "pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31": Phase="Pending", Reason="", readiness=false. Elapsed: 20.936564ms
Jul 19 23:44:52.550: INFO: Pod "pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025332139s
Jul 19 23:44:54.554: INFO: Pod "pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029931917s
STEP: Saw pod success
Jul 19 23:44:54.554: INFO: Pod "pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31" satisfied condition "success or failure"
Jul 19 23:44:54.559: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31 container secret-volume-test:
STEP: delete the pod
Jul 19 23:44:54.577: INFO: Waiting for pod pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31 to disappear
Jul 19 23:44:54.595: INFO: Pod pod-projected-secrets-f27047e7-bc48-46e7-a9ce-244168224d31 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:44:54.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1155" for this suite.
Jul 19 23:45:00.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:45:00.676: INFO: namespace projected-1155 deletion completed in 6.077873373s
• [SLOW TEST:10.749 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:45:00.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-8clt
STEP: Creating a pod to test atomic-volume-subpath
Jul 19 23:45:00.950: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8clt" in namespace "subpath-3811" to be "success or failure"
Jul 19 23:45:00.953: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982645ms
Jul 19 23:45:02.997: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047742467s
Jul 19 23:45:05.001: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 4.050943053s
Jul 19 23:45:07.005: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 6.055637293s
Jul 19 23:45:09.304: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 8.354017938s
Jul 19 23:45:11.307: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 10.357769804s
Jul 19 23:45:13.371: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 12.42148246s
Jul 19 23:45:15.375: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 14.42513384s
Jul 19 23:45:17.379: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 16.429121811s
Jul 19 23:45:19.383: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 18.433156462s
Jul 19 23:45:21.387: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 20.437469516s
Jul 19 23:45:23.391: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Running", Reason="", readiness=true. Elapsed: 22.441759554s
Jul 19 23:45:25.514: INFO: Pod "pod-subpath-test-configmap-8clt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.564262763s
STEP: Saw pod success
Jul 19 23:45:25.514: INFO: Pod "pod-subpath-test-configmap-8clt" satisfied condition "success or failure"
Jul 19 23:45:25.553: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-8clt container test-container-subpath-configmap-8clt:
STEP: delete the pod
Jul 19 23:45:25.749: INFO: Waiting for pod pod-subpath-test-configmap-8clt to disappear
Jul 19 23:45:25.991: INFO: Pod pod-subpath-test-configmap-8clt no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8clt
Jul 19 23:45:25.991: INFO: Deleting pod "pod-subpath-test-configmap-8clt" in namespace "subpath-3811"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:45:25.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3811" for this suite.
Jul 19 23:45:32.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:45:32.180: INFO: namespace subpath-3811 deletion completed in 6.182999356s
• [SLOW TEST:31.503 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:45:32.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1040
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 19 23:45:32.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 19 23:46:01.077: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1040 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:46:01.077: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:46:01.113157 6 log.go:172] (0xc000450370) (0xc002b96640) Create stream
I0719 23:46:01.113187 6 log.go:172] (0xc000450370) (0xc002b96640) Stream added, broadcasting: 1
I0719 23:46:01.115655 6 log.go:172] (0xc000450370) Reply frame received for 1
I0719 23:46:01.115686 6 log.go:172] (0xc000450370) (0xc001504000) Create stream
I0719 23:46:01.115697 6 log.go:172] (0xc000450370) (0xc001504000) Stream added, broadcasting: 3
I0719 23:46:01.117041 6 log.go:172] (0xc000450370) Reply frame received for 3
I0719 23:46:01.117097 6 log.go:172] (0xc000450370) (0xc001504140) Create stream
I0719 23:46:01.117124 6 log.go:172] (0xc000450370) (0xc001504140) Stream added, broadcasting: 5
I0719 23:46:01.118415 6 log.go:172] (0xc000450370) Reply frame received for 5
I0719 23:46:02.169904 6 log.go:172] (0xc000450370) Data frame received for 3
I0719 23:46:02.169967 6 log.go:172] (0xc001504000) (3) Data frame handling
I0719 23:46:02.169981 6 log.go:172] (0xc001504000) (3) Data frame sent
I0719 23:46:02.170043 6 log.go:172] (0xc000450370) Data frame received for 5
I0719 23:46:02.170084 6 log.go:172] (0xc001504140) (5) Data frame handling
I0719 23:46:02.170121 6 log.go:172] (0xc000450370) Data frame received for 3
I0719 23:46:02.170145 6 log.go:172] (0xc001504000) (3) Data frame handling
I0719 23:46:02.172382 6 log.go:172] (0xc000450370) Data frame received for 1
I0719 23:46:02.172419 6 log.go:172] (0xc002b96640) (1) Data frame handling
I0719 23:46:02.172454 6 log.go:172] (0xc002b96640) (1) Data frame sent
I0719 23:46:02.172669 6 log.go:172] (0xc000450370) (0xc002b96640) Stream removed, broadcasting: 1
I0719 23:46:02.172892 6 log.go:172] (0xc000450370) Go away received
I0719 23:46:02.172948 6 log.go:172] (0xc000450370) (0xc002b96640) Stream removed, broadcasting: 1
I0719 23:46:02.172976 6 log.go:172] (0xc000450370) (0xc001504000) Stream removed, broadcasting: 3
I0719 23:46:02.172990 6 log.go:172] (0xc000450370) (0xc001504140) Stream removed, broadcasting: 5
Jul 19 23:46:02.173: INFO: Found all expected endpoints: [netserver-0]
Jul 19 23:46:02.177: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1040 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 19 23:46:02.177: INFO: >>> kubeConfig: /root/.kube/config
I0719 23:46:02.207684 6 log.go:172] (0xc002478370) (0xc0004c2be0) Create stream
I0719 23:46:02.207700 6 log.go:172] (0xc002478370) (0xc0004c2be0) Stream added, broadcasting: 1
I0719 23:46:02.210157 6 log.go:172] (0xc002478370) Reply frame received for 1
I0719 23:46:02.210202 6 log.go:172] (0xc002478370) (0xc002b966e0) Create stream
I0719 23:46:02.210217 6 log.go:172] (0xc002478370) (0xc002b966e0) Stream added, broadcasting: 3
I0719 23:46:02.211271 6 log.go:172] (0xc002478370) Reply frame received for 3
I0719 23:46:02.211322 6 log.go:172] (0xc002478370) (0xc002b96780) Create stream
I0719 23:46:02.211342 6 log.go:172] (0xc002478370) (0xc002b96780) Stream added, broadcasting: 5
I0719 23:46:02.212219 6 log.go:172] (0xc002478370) Reply frame received for 5
I0719 23:46:03.267477 6 log.go:172] (0xc002478370) Data frame received for 5
I0719 23:46:03.267555 6 log.go:172] (0xc002b96780) (5) Data frame handling
I0719 23:46:03.267631 6 log.go:172] (0xc002478370) Data frame received for 3
I0719 23:46:03.267656 6 log.go:172] (0xc002b966e0) (3) Data frame handling
I0719 23:46:03.267679 6 log.go:172] (0xc002b966e0) (3) Data frame sent
I0719 23:46:03.267698 6 log.go:172] (0xc002478370) Data frame received for 3
I0719 23:46:03.267714 6 log.go:172] (0xc002b966e0) (3) Data frame handling
I0719 23:46:03.270214 6 log.go:172] (0xc002478370) Data frame received for 1
I0719 23:46:03.270253 6 log.go:172] (0xc0004c2be0) (1) Data frame handling
I0719 23:46:03.270279 6 log.go:172] (0xc0004c2be0) (1) Data frame sent
I0719 23:46:03.270302 6 log.go:172] (0xc002478370) (0xc0004c2be0) Stream removed, broadcasting: 1
I0719 23:46:03.270329 6 log.go:172] (0xc002478370) Go away received
I0719 23:46:03.270432 6 log.go:172] (0xc002478370) (0xc0004c2be0) Stream removed, broadcasting: 1
I0719 23:46:03.270450 6 log.go:172] (0xc002478370) (0xc002b966e0) Stream removed, broadcasting: 3
I0719 23:46:03.270459 6 log.go:172] (0xc002478370) (0xc002b96780) Stream removed, broadcasting: 5
Jul 19 23:46:03.270: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:46:03.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1040" for this suite.
Jul 19 23:46:27.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:46:27.553: INFO: namespace pod-network-test-1040 deletion completed in 24.277407018s
• [SLOW TEST:55.373 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:46:27.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-06b362b2-e784-4aa2-ab77-8d60705063e3
STEP: Creating a pod to test consume configMaps
Jul 19 23:46:27.754: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54" in namespace "configmap-3531" to be "success or failure"
Jul 19 23:46:27.789: INFO: Pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54": Phase="Pending", Reason="", readiness=false. Elapsed: 35.258057ms
Jul 19 23:46:29.849: INFO: Pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095328883s
Jul 19 23:46:32.173: INFO: Pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.418853625s
Jul 19 23:46:34.177: INFO: Pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422883634s
STEP: Saw pod success
Jul 19 23:46:34.177: INFO: Pod "pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54" satisfied condition "success or failure"
Jul 19 23:46:34.179: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54 container configmap-volume-test:
STEP: delete the pod
Jul 19 23:46:34.282: INFO: Waiting for pod pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54 to disappear
Jul 19 23:46:34.296: INFO: Pod pod-configmaps-dbb4cc78-3f95-4d09-b627-d3e4e9f10a54 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:46:34.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3531" for this suite.
Jul 19 23:46:42.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:46:42.728: INFO: namespace configmap-3531 deletion completed in 8.428746264s
• [SLOW TEST:15.176 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:46:42.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7998.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7998.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7998.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7998.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.16.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.16.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.16.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.16.14_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7998.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7998.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7998.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7998.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7998.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7998.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.16.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.16.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.16.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.16.14_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 19 23:46:53.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.870: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.873: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.876: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.895: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.898: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.901: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:53.921: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local]
Jul 19 23:46:58.925: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.928: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.931: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.933: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.950: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.952: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.955: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.958: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25)
Jul 19 23:46:58.973: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local]
Jul 19 23:47:03.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod
dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.932: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.936: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.967: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.976: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not 
find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:03.994: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local] Jul 19 23:47:08.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.933: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.936: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.957: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods 
dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.963: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.966: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:08.984: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local] Jul 19 23:47:13.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods 
dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.933: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.937: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.957: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.959: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.961: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.964: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:13.982: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local] Jul 19 23:47:18.926: INFO: Unable to read wheezy_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.929: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.931: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.934: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.960: INFO: Unable to read jessie_udp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.965: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local from pod dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25: the server could not find the requested resource (get pods dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25) Jul 19 23:47:18.985: INFO: Lookups using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 failed for: [wheezy_udp@dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@dns-test-service.dns-7998.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_udp@dns-test-service.dns-7998.svc.cluster.local jessie_tcp@dns-test-service.dns-7998.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7998.svc.cluster.local] Jul 19 23:47:23.993: INFO: DNS probes using dns-7998/dns-test-0b50dd3d-f518-47f8-ba14-5b99304a1a25 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:47:24.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7998" for this suite. 
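The dig probe script shown earlier derives two of the names it queries from the pod IP: a pod A record (dots replaced by dashes, suffixed with the namespace) and a reverse PTR name. A minimal standalone sketch of those two transformations, using the IP 10.99.16.14 and namespace dns-7998 from this run (the function names are illustrative, not from the suite source):

```python
def pod_a_record(ip: str, namespace: str) -> str:
    # Mirrors the probe's awk step: 10.99.16.14 -> 10-99-16-14.dns-7998.pod.cluster.local
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    # Reverse the octets for the PTR query: 10.99.16.14 -> 14.16.99.10.in-addr.arpa.
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.99.16.14", "dns-7998"))  # 10-99-16-14.dns-7998.pod.cluster.local
print(ptr_name("10.99.16.14"))                  # 14.16.99.10.in-addr.arpa.
```

These match the `podARec` and `14.16.99.10.in-addr.arpa.` names visible in the probe commands above.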
Jul 19 23:47:31.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:47:31.110: INFO: namespace dns-7998 deletion completed in 6.218403629s • [SLOW TEST:48.381 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:47:31.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e7683fe3-4f1a-4589-96c7-8cb6f77430b6 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e7683fe3-4f1a-4589-96c7-8cb6f77430b6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:49:04.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3816" for this suite. 
Jul 19 23:49:28.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:49:28.567: INFO: namespace projected-3816 deletion completed in 24.133822458s • [SLOW TEST:117.456 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:49:28.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 19 23:49:28.667: INFO: Waiting up to 5m0s for pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341" in namespace "emptydir-3877" to be "success or failure" Jul 19 23:49:28.703: INFO: Pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341": Phase="Pending", Reason="", readiness=false. Elapsed: 35.189952ms Jul 19 23:49:30.707: INFO: Pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040022125s Jul 19 23:49:32.906: INFO: Pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341": Phase="Running", Reason="", readiness=true. Elapsed: 4.238972444s Jul 19 23:49:34.910: INFO: Pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243068919s STEP: Saw pod success Jul 19 23:49:34.910: INFO: Pod "pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341" satisfied condition "success or failure" Jul 19 23:49:34.913: INFO: Trying to get logs from node iruya-worker pod pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341 container test-container: STEP: delete the pod Jul 19 23:49:35.045: INFO: Waiting for pod pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341 to disappear Jul 19 23:49:35.061: INFO: Pod pod-a7a35f3e-f32d-4751-8962-d1dbbc00d341 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:49:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3877" for this suite. 
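The EmptyDir test above asserts that a file on the tmpfs-backed volume carries mode 0666. A rough local sketch of that kind of mode check (an assumption for illustration, not the suite's actual test container logic, and using a temp dir rather than a real tmpfs mount):

```python
import os
import stat
import tempfile

# Create a file and force the 0666 mode the test expects (chmod ignores umask).
path = os.path.join(tempfile.mkdtemp(), "mount-test")
with open(path, "w") as f:
    f.write("mount contents")
os.chmod(path, 0o666)

# Extract just the permission bits, as a mode check would.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
```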
Jul 19 23:49:41.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:49:41.179: INFO: namespace emptydir-3877 deletion completed in 6.113765808s • [SLOW TEST:12.612 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:49:41.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-a87c1111-8bfc-488f-a3fe-b3a663df988a STEP: Creating a pod to test consume secrets Jul 19 23:49:41.308: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0" in namespace "projected-4970" to be "success or failure" Jul 19 23:49:41.311: INFO: Pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.68185ms Jul 19 23:49:43.655: INFO: Pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346958119s Jul 19 23:49:45.661: INFO: Pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0": Phase="Running", Reason="", readiness=true. Elapsed: 4.353307615s Jul 19 23:49:47.665: INFO: Pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.35701021s STEP: Saw pod success Jul 19 23:49:47.665: INFO: Pod "pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0" satisfied condition "success or failure" Jul 19 23:49:47.667: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0 container projected-secret-volume-test: STEP: delete the pod Jul 19 23:49:48.063: INFO: Waiting for pod pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0 to disappear Jul 19 23:49:48.102: INFO: Pod pod-projected-secrets-41431ed7-a8c1-41c8-b325-d0880120eca0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:49:48.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4970" for this suite. 
Jul 19 23:49:54.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:49:54.459: INFO: namespace projected-4970 deletion completed in 6.149859513s • [SLOW TEST:13.279 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:49:54.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 19 23:49:54.561: INFO: Waiting up to 5m0s for pod "downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc" in namespace "downward-api-9776" to be "success or failure" Jul 19 23:49:54.604: INFO: Pod "downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.32093ms Jul 19 23:49:56.608: INFO: Pod "downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047241565s Jul 19 23:49:58.619: INFO: Pod "downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057497859s STEP: Saw pod success Jul 19 23:49:58.619: INFO: Pod "downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc" satisfied condition "success or failure" Jul 19 23:49:58.622: INFO: Trying to get logs from node iruya-worker pod downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc container dapi-container: STEP: delete the pod Jul 19 23:49:58.642: INFO: Waiting for pod downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc to disappear Jul 19 23:49:58.647: INFO: Pod downward-api-c80065bf-6b1d-41cd-97e7-9e1bd731e6bc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:49:58.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9776" for this suite. Jul 19 23:50:04.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:50:04.741: INFO: namespace downward-api-9776 deletion completed in 6.090069887s • [SLOW TEST:10.281 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:50:04.741: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 19 23:50:04.860: INFO: Waiting up to 5m0s for pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213" in namespace "emptydir-9995" to be "success or failure" Jul 19 23:50:04.881: INFO: Pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213": Phase="Pending", Reason="", readiness=false. Elapsed: 21.247428ms Jul 19 23:50:06.984: INFO: Pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12413033s Jul 19 23:50:08.988: INFO: Pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213": Phase="Running", Reason="", readiness=true. Elapsed: 4.128414805s Jul 19 23:50:10.992: INFO: Pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131861532s STEP: Saw pod success Jul 19 23:50:10.992: INFO: Pod "pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213" satisfied condition "success or failure" Jul 19 23:50:10.994: INFO: Trying to get logs from node iruya-worker2 pod pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213 container test-container: STEP: delete the pod Jul 19 23:50:11.034: INFO: Waiting for pod pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213 to disappear Jul 19 23:50:11.050: INFO: Pod pod-6c1b0c61-06ed-46e7-abbc-69e0c96c2213 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:50:11.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9995" for this suite. 
Jul 19 23:50:19.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:50:19.187: INFO: namespace emptydir-9995 deletion completed in 8.132773809s • [SLOW TEST:14.446 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:50:19.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jul 19 23:50:19.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9394' Jul 19 23:50:19.528: INFO: stderr: "" Jul 19 23:50:19.528: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jul 19 23:50:20.532: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:20.532: INFO: Found 0 / 1 Jul 19 23:50:21.547: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:21.547: INFO: Found 0 / 1 Jul 19 23:50:22.559: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:22.559: INFO: Found 0 / 1 Jul 19 23:50:23.751: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:23.751: INFO: Found 0 / 1 Jul 19 23:50:24.532: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:24.532: INFO: Found 0 / 1 Jul 19 23:50:25.532: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:25.532: INFO: Found 0 / 1 Jul 19 23:50:26.531: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:26.532: INFO: Found 1 / 1 Jul 19 23:50:26.532: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 19 23:50:26.534: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:26.534: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 19 23:50:26.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-pqw52 --namespace=kubectl-9394 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 19 23:50:26.629: INFO: stderr: "" Jul 19 23:50:26.629: INFO: stdout: "pod/redis-master-pqw52 patched\n" STEP: checking annotations Jul 19 23:50:26.635: INFO: Selector matched 1 pods for map[app:redis] Jul 19 23:50:26.635: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:50:26.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9394" for this suite. 
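The `kubectl patch` invocation above passes a strategic-merge patch as inline JSON. A small sketch that builds that exact patch body explicitly, which can be handy when the annotation values are computed rather than typed by hand:

```python
import json

# The same patch the test sends: add annotation x=y to the pod's metadata.
patch = {"metadata": {"annotations": {"x": "y"}}}
body = json.dumps(patch, separators=(",", ":"))
print(body)  # {"metadata":{"annotations":{"x":"y"}}}
```

The resulting string is byte-for-byte the `-p` argument shown in the log.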
Jul 19 23:50:48.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:50:48.742: INFO: namespace kubectl-9394 deletion completed in 22.105244857s
• [SLOW TEST:29.555 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:50:48.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 19 23:50:48.794: INFO: Waiting up to 5m0s for pod "pod-545eb472-c274-4129-a088-871e3cad4242" in namespace "emptydir-1471" to be "success or failure"
Jul 19 23:50:48.812: INFO: Pod "pod-545eb472-c274-4129-a088-871e3cad4242": Phase="Pending", Reason="", readiness=false. Elapsed: 17.947054ms
Jul 19 23:50:50.816: INFO: Pod "pod-545eb472-c274-4129-a088-871e3cad4242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022354848s
Jul 19 23:50:52.821: INFO: Pod "pod-545eb472-c274-4129-a088-871e3cad4242": Phase="Running", Reason="", readiness=true. Elapsed: 4.026548953s
Jul 19 23:50:54.824: INFO: Pod "pod-545eb472-c274-4129-a088-871e3cad4242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030408207s
STEP: Saw pod success
Jul 19 23:50:54.825: INFO: Pod "pod-545eb472-c274-4129-a088-871e3cad4242" satisfied condition "success or failure"
Jul 19 23:50:54.827: INFO: Trying to get logs from node iruya-worker2 pod pod-545eb472-c274-4129-a088-871e3cad4242 container test-container:
STEP: delete the pod
Jul 19 23:50:54.898: INFO: Waiting for pod pod-545eb472-c274-4129-a088-871e3cad4242 to disappear
Jul 19 23:50:54.972: INFO: Pod pod-545eb472-c274-4129-a088-871e3cad4242 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:50:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1471" for this suite.
Jul 19 23:51:01.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:51:01.115: INFO: namespace emptydir-1471 deletion completed in 6.13855767s
• [SLOW TEST:12.373 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:51:01.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 19 23:51:02.388: INFO: Waiting up to 5m0s for pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17" in namespace "emptydir-5752" to be "success or failure"
Jul 19 23:51:02.453: INFO: Pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17": Phase="Pending", Reason="", readiness=false. Elapsed: 64.363692ms
Jul 19 23:51:04.458: INFO: Pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069255656s
Jul 19 23:51:06.462: INFO: Pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17": Phase="Running", Reason="", readiness=true. Elapsed: 4.073910059s
Jul 19 23:51:08.467: INFO: Pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078401326s
STEP: Saw pod success
Jul 19 23:51:08.467: INFO: Pod "pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17" satisfied condition "success or failure"
Jul 19 23:51:08.470: INFO: Trying to get logs from node iruya-worker pod pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17 container test-container:
STEP: delete the pod
Jul 19 23:51:08.509: INFO: Waiting for pod pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17 to disappear
Jul 19 23:51:08.590: INFO: Pod pod-d79a097a-fd4a-4714-84d7-ff5753d5cc17 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:51:08.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5752" for this suite.
Jul 19 23:51:14.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:51:14.785: INFO: namespace emptydir-5752 deletion completed in 6.190022034s
• [SLOW TEST:13.669 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:51:14.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-0eb289cf-88f5-4caa-b92d-632765cc90f5
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 19 23:51:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-189" for this suite.
Jul 19 23:51:43.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 19 23:51:43.371: INFO: namespace configmap-189 deletion completed in 22.090220651s
• [SLOW TEST:28.586 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 19 23:51:43.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2478 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2478 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2478 Jul 19 23:51:43.447: INFO: Found 0 stateful pods, waiting for 1 Jul 19 23:51:53.451: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 19 23:51:53.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 19 23:51:53.715: INFO: stderr: "I0719 23:51:53.603388 595 log.go:172] (0xc000116d10) (0xc00069c780) Create stream\nI0719 23:51:53.603479 595 log.go:172] (0xc000116d10) (0xc00069c780) Stream added, broadcasting: 1\nI0719 23:51:53.606083 595 log.go:172] (0xc000116d10) Reply frame received for 1\nI0719 23:51:53.606126 595 log.go:172] (0xc000116d10) (0xc0007b60a0) Create stream\nI0719 23:51:53.606135 595 log.go:172] (0xc000116d10) (0xc0007b60a0) Stream added, broadcasting: 3\nI0719 23:51:53.607120 595 log.go:172] (0xc000116d10) Reply frame received for 3\nI0719 23:51:53.607165 595 log.go:172] (0xc000116d10) (0xc00069c820) Create stream\nI0719 23:51:53.607178 595 log.go:172] (0xc000116d10) (0xc00069c820) Stream added, broadcasting: 5\nI0719 23:51:53.608261 595 log.go:172] (0xc000116d10) Reply frame received for 5\nI0719 23:51:53.679301 595 log.go:172] (0xc000116d10) Data 
frame received for 5\nI0719 23:51:53.679327 595 log.go:172] (0xc00069c820) (5) Data frame handling\nI0719 23:51:53.679339 595 log.go:172] (0xc00069c820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0719 23:51:53.709744 595 log.go:172] (0xc000116d10) Data frame received for 3\nI0719 23:51:53.709773 595 log.go:172] (0xc0007b60a0) (3) Data frame handling\nI0719 23:51:53.709785 595 log.go:172] (0xc0007b60a0) (3) Data frame sent\nI0719 23:51:53.709795 595 log.go:172] (0xc000116d10) Data frame received for 3\nI0719 23:51:53.709804 595 log.go:172] (0xc0007b60a0) (3) Data frame handling\nI0719 23:51:53.709818 595 log.go:172] (0xc000116d10) Data frame received for 5\nI0719 23:51:53.709851 595 log.go:172] (0xc00069c820) (5) Data frame handling\nI0719 23:51:53.711613 595 log.go:172] (0xc000116d10) Data frame received for 1\nI0719 23:51:53.711635 595 log.go:172] (0xc00069c780) (1) Data frame handling\nI0719 23:51:53.711642 595 log.go:172] (0xc00069c780) (1) Data frame sent\nI0719 23:51:53.711736 595 log.go:172] (0xc000116d10) (0xc00069c780) Stream removed, broadcasting: 1\nI0719 23:51:53.711789 595 log.go:172] (0xc000116d10) Go away received\nI0719 23:51:53.711951 595 log.go:172] (0xc000116d10) (0xc00069c780) Stream removed, broadcasting: 1\nI0719 23:51:53.711961 595 log.go:172] (0xc000116d10) (0xc0007b60a0) Stream removed, broadcasting: 3\nI0719 23:51:53.711966 595 log.go:172] (0xc000116d10) (0xc00069c820) Stream removed, broadcasting: 5\n" Jul 19 23:51:53.715: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 19 23:51:53.715: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 19 23:51:53.719: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 19 23:52:03.722: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 19 23:52:03.722: INFO: Waiting for 
statefulset status.replicas updated to 0 Jul 19 23:52:03.757: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:03.757: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC }] Jul 19 23:52:03.757: INFO: Jul 19 23:52:03.757: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 19 23:52:04.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97292573s Jul 19 23:52:05.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968676687s Jul 19 23:52:07.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.779916204s Jul 19 23:52:08.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.701270641s Jul 19 23:52:09.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.690693404s Jul 19 23:52:10.063: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.672309897s Jul 19 23:52:11.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.666836732s Jul 19 23:52:12.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.648359305s Jul 19 23:52:13.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 641.942849ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2478 Jul 19 23:52:14.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 19 23:52:14.307: INFO: stderr: "I0719 23:52:14.226142 616 log.go:172] (0xc000a18420) (0xc000968780) 
Create stream\nI0719 23:52:14.226220 616 log.go:172] (0xc000a18420) (0xc000968780) Stream added, broadcasting: 1\nI0719 23:52:14.228718 616 log.go:172] (0xc000a18420) Reply frame received for 1\nI0719 23:52:14.228842 616 log.go:172] (0xc000a18420) (0xc0002f0000) Create stream\nI0719 23:52:14.228853 616 log.go:172] (0xc000a18420) (0xc0002f0000) Stream added, broadcasting: 3\nI0719 23:52:14.229745 616 log.go:172] (0xc000a18420) Reply frame received for 3\nI0719 23:52:14.229773 616 log.go:172] (0xc000a18420) (0xc00065c280) Create stream\nI0719 23:52:14.229785 616 log.go:172] (0xc000a18420) (0xc00065c280) Stream added, broadcasting: 5\nI0719 23:52:14.230668 616 log.go:172] (0xc000a18420) Reply frame received for 5\nI0719 23:52:14.300047 616 log.go:172] (0xc000a18420) Data frame received for 5\nI0719 23:52:14.300088 616 log.go:172] (0xc00065c280) (5) Data frame handling\nI0719 23:52:14.300107 616 log.go:172] (0xc00065c280) (5) Data frame sent\nI0719 23:52:14.300121 616 log.go:172] (0xc000a18420) Data frame received for 5\nI0719 23:52:14.300133 616 log.go:172] (0xc00065c280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0719 23:52:14.300154 616 log.go:172] (0xc000a18420) Data frame received for 3\nI0719 23:52:14.300181 616 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0719 23:52:14.300198 616 log.go:172] (0xc0002f0000) (3) Data frame sent\nI0719 23:52:14.300214 616 log.go:172] (0xc000a18420) Data frame received for 3\nI0719 23:52:14.300226 616 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0719 23:52:14.301829 616 log.go:172] (0xc000a18420) Data frame received for 1\nI0719 23:52:14.301846 616 log.go:172] (0xc000968780) (1) Data frame handling\nI0719 23:52:14.301858 616 log.go:172] (0xc000968780) (1) Data frame sent\nI0719 23:52:14.301874 616 log.go:172] (0xc000a18420) (0xc000968780) Stream removed, broadcasting: 1\nI0719 23:52:14.301888 616 log.go:172] (0xc000a18420) Go away received\nI0719 23:52:14.302298 616 log.go:172] 
(0xc000a18420) (0xc000968780) Stream removed, broadcasting: 1\nI0719 23:52:14.302332 616 log.go:172] (0xc000a18420) (0xc0002f0000) Stream removed, broadcasting: 3\nI0719 23:52:14.302345 616 log.go:172] (0xc000a18420) (0xc00065c280) Stream removed, broadcasting: 5\n" Jul 19 23:52:14.307: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 19 23:52:14.307: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 19 23:52:14.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 19 23:52:14.507: INFO: stderr: "I0719 23:52:14.425646 635 log.go:172] (0xc00096e840) (0xc000762aa0) Create stream\nI0719 23:52:14.425703 635 log.go:172] (0xc00096e840) (0xc000762aa0) Stream added, broadcasting: 1\nI0719 23:52:14.430239 635 log.go:172] (0xc00096e840) Reply frame received for 1\nI0719 23:52:14.430269 635 log.go:172] (0xc00096e840) (0xc00003a320) Create stream\nI0719 23:52:14.430278 635 log.go:172] (0xc00096e840) (0xc00003a320) Stream added, broadcasting: 3\nI0719 23:52:14.431046 635 log.go:172] (0xc00096e840) Reply frame received for 3\nI0719 23:52:14.431084 635 log.go:172] (0xc00096e840) (0xc0007620a0) Create stream\nI0719 23:52:14.431095 635 log.go:172] (0xc00096e840) (0xc0007620a0) Stream added, broadcasting: 5\nI0719 23:52:14.431955 635 log.go:172] (0xc00096e840) Reply frame received for 5\nI0719 23:52:14.500208 635 log.go:172] (0xc00096e840) Data frame received for 3\nI0719 23:52:14.500255 635 log.go:172] (0xc00003a320) (3) Data frame handling\nI0719 23:52:14.500281 635 log.go:172] (0xc00003a320) (3) Data frame sent\nI0719 23:52:14.500298 635 log.go:172] (0xc00096e840) Data frame received for 3\nI0719 23:52:14.500313 635 log.go:172] (0xc00003a320) (3) Data frame handling\nI0719 23:52:14.500363 635 log.go:172] (0xc00096e840) Data frame 
received for 5\nI0719 23:52:14.500391 635 log.go:172] (0xc0007620a0) (5) Data frame handling\nI0719 23:52:14.500418 635 log.go:172] (0xc0007620a0) (5) Data frame sent\nI0719 23:52:14.500433 635 log.go:172] (0xc00096e840) Data frame received for 5\nI0719 23:52:14.500446 635 log.go:172] (0xc0007620a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0719 23:52:14.502446 635 log.go:172] (0xc00096e840) Data frame received for 1\nI0719 23:52:14.502486 635 log.go:172] (0xc000762aa0) (1) Data frame handling\nI0719 23:52:14.502505 635 log.go:172] (0xc000762aa0) (1) Data frame sent\nI0719 23:52:14.502539 635 log.go:172] (0xc00096e840) (0xc000762aa0) Stream removed, broadcasting: 1\nI0719 23:52:14.502565 635 log.go:172] (0xc00096e840) Go away received\nI0719 23:52:14.502979 635 log.go:172] (0xc00096e840) (0xc000762aa0) Stream removed, broadcasting: 1\nI0719 23:52:14.503010 635 log.go:172] (0xc00096e840) (0xc00003a320) Stream removed, broadcasting: 3\nI0719 23:52:14.503029 635 log.go:172] (0xc00096e840) (0xc0007620a0) Stream removed, broadcasting: 5\n" Jul 19 23:52:14.507: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 19 23:52:14.507: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 19 23:52:14.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 19 23:52:14.897: INFO: stderr: "I0719 23:52:14.825416 649 log.go:172] (0xc000a20630) (0xc00066ca00) Create stream\nI0719 23:52:14.825480 649 log.go:172] (0xc000a20630) (0xc00066ca00) Stream added, broadcasting: 1\nI0719 23:52:14.829022 649 log.go:172] (0xc000a20630) Reply frame received for 1\nI0719 23:52:14.829081 649 log.go:172] (0xc000a20630) (0xc00066c280) Create stream\nI0719 
23:52:14.829094 649 log.go:172] (0xc000a20630) (0xc00066c280) Stream added, broadcasting: 3\nI0719 23:52:14.830084 649 log.go:172] (0xc000a20630) Reply frame received for 3\nI0719 23:52:14.830128 649 log.go:172] (0xc000a20630) (0xc000656000) Create stream\nI0719 23:52:14.830142 649 log.go:172] (0xc000a20630) (0xc000656000) Stream added, broadcasting: 5\nI0719 23:52:14.831014 649 log.go:172] (0xc000a20630) Reply frame received for 5\nI0719 23:52:14.891263 649 log.go:172] (0xc000a20630) Data frame received for 3\nI0719 23:52:14.891292 649 log.go:172] (0xc00066c280) (3) Data frame handling\nI0719 23:52:14.891308 649 log.go:172] (0xc00066c280) (3) Data frame sent\nI0719 23:52:14.891324 649 log.go:172] (0xc000a20630) Data frame received for 3\nI0719 23:52:14.891335 649 log.go:172] (0xc00066c280) (3) Data frame handling\nI0719 23:52:14.891741 649 log.go:172] (0xc000a20630) Data frame received for 5\nI0719 23:52:14.891753 649 log.go:172] (0xc000656000) (5) Data frame handling\nI0719 23:52:14.891759 649 log.go:172] (0xc000656000) (5) Data frame sent\nI0719 23:52:14.891766 649 log.go:172] (0xc000a20630) Data frame received for 5\nI0719 23:52:14.891770 649 log.go:172] (0xc000656000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0719 23:52:14.893264 649 log.go:172] (0xc000a20630) Data frame received for 1\nI0719 23:52:14.893275 649 log.go:172] (0xc00066ca00) (1) Data frame handling\nI0719 23:52:14.893283 649 log.go:172] (0xc00066ca00) (1) Data frame sent\nI0719 23:52:14.893407 649 log.go:172] (0xc000a20630) (0xc00066ca00) Stream removed, broadcasting: 1\nI0719 23:52:14.893479 649 log.go:172] (0xc000a20630) Go away received\nI0719 23:52:14.893643 649 log.go:172] (0xc000a20630) (0xc00066ca00) Stream removed, broadcasting: 1\nI0719 23:52:14.893655 649 log.go:172] (0xc000a20630) (0xc00066c280) Stream removed, broadcasting: 3\nI0719 23:52:14.893661 649 log.go:172] (0xc000a20630) 
(0xc000656000) Stream removed, broadcasting: 5\n" Jul 19 23:52:14.897: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 19 23:52:14.897: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 19 23:52:14.901: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 23:52:14.901: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 23:52:14.901: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 19 23:52:14.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 19 23:52:15.301: INFO: stderr: "I0719 23:52:15.121353 670 log.go:172] (0xc0007f2420) (0xc0008a8780) Create stream\nI0719 23:52:15.121431 670 log.go:172] (0xc0007f2420) (0xc0008a8780) Stream added, broadcasting: 1\nI0719 23:52:15.124132 670 log.go:172] (0xc0007f2420) Reply frame received for 1\nI0719 23:52:15.124194 670 log.go:172] (0xc0007f2420) (0xc00037a140) Create stream\nI0719 23:52:15.124212 670 log.go:172] (0xc0007f2420) (0xc00037a140) Stream added, broadcasting: 3\nI0719 23:52:15.125535 670 log.go:172] (0xc0007f2420) Reply frame received for 3\nI0719 23:52:15.125577 670 log.go:172] (0xc0007f2420) (0xc0008a8820) Create stream\nI0719 23:52:15.125589 670 log.go:172] (0xc0007f2420) (0xc0008a8820) Stream added, broadcasting: 5\nI0719 23:52:15.126627 670 log.go:172] (0xc0007f2420) Reply frame received for 5\nI0719 23:52:15.185875 670 log.go:172] (0xc0007f2420) Data frame received for 5\nI0719 23:52:15.185901 670 log.go:172] (0xc0008a8820) (5) Data frame handling\nI0719 23:52:15.185916 670 log.go:172] (0xc0008a8820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0719 23:52:15.294090 670 log.go:172] (0xc0007f2420) Data frame received for 3\nI0719 23:52:15.294119 670 log.go:172] (0xc00037a140) (3) Data frame handling\nI0719 23:52:15.294136 670 log.go:172] (0xc00037a140) (3) Data frame sent\nI0719 23:52:15.294146 670 log.go:172] (0xc0007f2420) Data frame received for 3\nI0719 23:52:15.294153 670 log.go:172] (0xc00037a140) (3) Data frame handling\nI0719 23:52:15.294356 670 log.go:172] (0xc0007f2420) Data frame received for 5\nI0719 23:52:15.294369 670 log.go:172] (0xc0008a8820) (5) Data frame handling\nI0719 23:52:15.296142 670 log.go:172] (0xc0007f2420) Data frame received for 1\nI0719 23:52:15.296172 670 log.go:172] (0xc0008a8780) (1) Data frame handling\nI0719 23:52:15.296196 670 log.go:172] (0xc0008a8780) (1) Data frame sent\nI0719 23:52:15.296216 670 log.go:172] (0xc0007f2420) (0xc0008a8780) Stream removed, broadcasting: 1\nI0719 23:52:15.296572 670 log.go:172] (0xc0007f2420) (0xc0008a8780) Stream removed, broadcasting: 1\nI0719 23:52:15.296606 670 log.go:172] (0xc0007f2420) (0xc00037a140) Stream removed, broadcasting: 3\nI0719 23:52:15.296627 670 log.go:172] (0xc0007f2420) (0xc0008a8820) Stream removed, broadcasting: 5\n" Jul 19 23:52:15.301: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 19 23:52:15.301: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 19 23:52:15.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 19 23:52:15.582: INFO: stderr: "I0719 23:52:15.420032 692 log.go:172] (0xc0006f2370) (0xc000776640) Create stream\nI0719 23:52:15.420086 692 log.go:172] (0xc0006f2370) (0xc000776640) Stream added, broadcasting: 1\nI0719 23:52:15.422198 692 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0719 23:52:15.422237 692 log.go:172] (0xc0006f2370) 
(0xc00073c000) Create stream\nI0719 23:52:15.422253 692 log.go:172] (0xc0006f2370) (0xc00073c000) Stream added, broadcasting: 3\nI0719 23:52:15.423073 692 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0719 23:52:15.423112 692 log.go:172] (0xc0006f2370) (0xc0004c4280) Create stream\nI0719 23:52:15.423125 692 log.go:172] (0xc0006f2370) (0xc0004c4280) Stream added, broadcasting: 5\nI0719 23:52:15.423941 692 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0719 23:52:15.488867 692 log.go:172] (0xc0006f2370) Data frame received for 5\nI0719 23:52:15.488894 692 log.go:172] (0xc0004c4280) (5) Data frame handling\nI0719 23:52:15.488905 692 log.go:172] (0xc0004c4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0719 23:52:15.574204 692 log.go:172] (0xc0006f2370) Data frame received for 3\nI0719 23:52:15.574261 692 log.go:172] (0xc00073c000) (3) Data frame handling\nI0719 23:52:15.574303 692 log.go:172] (0xc00073c000) (3) Data frame sent\nI0719 23:52:15.574834 692 log.go:172] (0xc0006f2370) Data frame received for 5\nI0719 23:52:15.574877 692 log.go:172] (0xc0004c4280) (5) Data frame handling\nI0719 23:52:15.574917 692 log.go:172] (0xc0006f2370) Data frame received for 3\nI0719 23:52:15.574946 692 log.go:172] (0xc00073c000) (3) Data frame handling\nI0719 23:52:15.577112 692 log.go:172] (0xc0006f2370) Data frame received for 1\nI0719 23:52:15.577148 692 log.go:172] (0xc000776640) (1) Data frame handling\nI0719 23:52:15.577178 692 log.go:172] (0xc000776640) (1) Data frame sent\nI0719 23:52:15.577198 692 log.go:172] (0xc0006f2370) (0xc000776640) Stream removed, broadcasting: 1\nI0719 23:52:15.577219 692 log.go:172] (0xc0006f2370) Go away received\nI0719 23:52:15.577770 692 log.go:172] (0xc0006f2370) (0xc000776640) Stream removed, broadcasting: 1\nI0719 23:52:15.577812 692 log.go:172] (0xc0006f2370) (0xc00073c000) Stream removed, broadcasting: 3\nI0719 23:52:15.577834 692 log.go:172] (0xc0006f2370) (0xc0004c4280) Stream removed, 
broadcasting: 5\n" Jul 19 23:52:15.582: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 19 23:52:15.582: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 19 23:52:15.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2478 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 19 23:52:16.154: INFO: stderr: "I0719 23:52:16.045300 708 log.go:172] (0xc00091a580) (0xc00058cb40) Create stream\nI0719 23:52:16.045368 708 log.go:172] (0xc00091a580) (0xc00058cb40) Stream added, broadcasting: 1\nI0719 23:52:16.049711 708 log.go:172] (0xc00091a580) Reply frame received for 1\nI0719 23:52:16.049753 708 log.go:172] (0xc00091a580) (0xc00058c280) Create stream\nI0719 23:52:16.049763 708 log.go:172] (0xc00091a580) (0xc00058c280) Stream added, broadcasting: 3\nI0719 23:52:16.050726 708 log.go:172] (0xc00091a580) Reply frame received for 3\nI0719 23:52:16.050803 708 log.go:172] (0xc00091a580) (0xc00010a000) Create stream\nI0719 23:52:16.050833 708 log.go:172] (0xc00091a580) (0xc00010a000) Stream added, broadcasting: 5\nI0719 23:52:16.051636 708 log.go:172] (0xc00091a580) Reply frame received for 5\nI0719 23:52:16.110996 708 log.go:172] (0xc00091a580) Data frame received for 5\nI0719 23:52:16.111050 708 log.go:172] (0xc00010a000) (5) Data frame handling\nI0719 23:52:16.111079 708 log.go:172] (0xc00010a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0719 23:52:16.144153 708 log.go:172] (0xc00091a580) Data frame received for 3\nI0719 23:52:16.144172 708 log.go:172] (0xc00058c280) (3) Data frame handling\nI0719 23:52:16.144194 708 log.go:172] (0xc00058c280) (3) Data frame sent\nI0719 23:52:16.144435 708 log.go:172] (0xc00091a580) Data frame received for 3\nI0719 23:52:16.144470 708 log.go:172] (0xc00058c280) (3) Data frame handling\nI0719 23:52:16.144502 708 
log.go:172] (0xc00091a580) Data frame received for 5\nI0719 23:52:16.144526 708 log.go:172] (0xc00010a000) (5) Data frame handling\nI0719 23:52:16.149542 708 log.go:172] (0xc00091a580) Data frame received for 1\nI0719 23:52:16.149568 708 log.go:172] (0xc00058cb40) (1) Data frame handling\nI0719 23:52:16.149583 708 log.go:172] (0xc00058cb40) (1) Data frame sent\nI0719 23:52:16.149607 708 log.go:172] (0xc00091a580) (0xc00058cb40) Stream removed, broadcasting: 1\nI0719 23:52:16.149933 708 log.go:172] (0xc00091a580) (0xc00058cb40) Stream removed, broadcasting: 1\nI0719 23:52:16.149948 708 log.go:172] (0xc00091a580) (0xc00058c280) Stream removed, broadcasting: 3\nI0719 23:52:16.149955 708 log.go:172] (0xc00091a580) (0xc00010a000) Stream removed, broadcasting: 5\n" Jul 19 23:52:16.154: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 19 23:52:16.154: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 19 23:52:16.154: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 23:52:16.158: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 19 23:52:26.166: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 19 23:52:26.166: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 19 23:52:26.166: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 19 23:52:26.193: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:26.193: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC }] Jul 19 23:52:26.193: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:26.194: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:26.194: INFO: Jul 19 23:52:26.194: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 23:52:27.490: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:27.490: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC }] Jul 19 23:52:27.490: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:27.490: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:27.490: INFO: Jul 19 23:52:27.490: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 23:52:28.495: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:28.495: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC }] Jul 19 23:52:28.495: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:28.495: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 
23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:28.495: INFO: Jul 19 23:52:28.495: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 23:52:29.501: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:29.501: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:51:43 +0000 UTC }] Jul 19 23:52:29.501: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:29.501: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:29.501: INFO: Jul 19 23:52:29.501: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 19 23:52:30.505: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:30.505: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:30.505: INFO: Jul 19 23:52:30.505: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 23:52:31.509: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:31.509: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:31.509: INFO: Jul 19 23:52:31.509: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 23:52:32.513: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:32.514: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] 
Jul 19 23:52:32.514: INFO: Jul 19 23:52:32.514: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 23:52:33.518: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:33.518: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:33.518: INFO: Jul 19 23:52:33.518: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 23:52:34.522: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:34.522: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:34.522: INFO: Jul 19 23:52:34.522: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 19 23:52:35.527: INFO: POD NODE PHASE GRACE CONDITIONS Jul 19 23:52:35.527: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-19 23:52:03 +0000 UTC }] Jul 19 23:52:35.527: INFO: Jul 19 
23:52:35.527: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2478 Jul 19 23:52:36.531: INFO: Scaling statefulset ss to 0 Jul 19 23:52:36.540: INFO: Waiting for statefulset status.replicas to be updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jul 19 23:52:36.542: INFO: Deleting all statefulsets in ns statefulset-2478 Jul 19 23:52:36.545: INFO: Scaling statefulset ss to 0 Jul 19 23:52:36.553: INFO: Waiting for statefulset status.replicas to be updated to 0 Jul 19 23:52:36.556: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:52:36.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2478" for this suite. 
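Each of the `mv -v ... || true` stdout lines above comes from the same readiness-breaking command the test execs in every pod. A minimal local sketch of why the `|| true` matters (temp directories here stand in for the container's `/usr/share/nginx/html` and `/tmp`):

```shell
# Throwaway directories simulate the pod filesystem (paths are stand-ins).
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$webroot/index.html"

# First run moves index.html away, which breaks the nginx readiness probe
# in the real test (Ready flips to false, as seen in the pod conditions above).
mv -v "$webroot/index.html" "$stash/" || true

# Second run fails (file already gone), but `|| true` forces exit status 0,
# so the framework's `kubectl exec` wrapper never reports a command error.
mv -v "$webroot/index.html" "$stash/" || true
echo "exit status: $?"   # prints: exit status: 0
```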
Jul 19 23:52:42.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:52:42.690: INFO: namespace statefulset-2478 deletion completed in 6.113728757s • [SLOW TEST:59.319 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:52:42.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jul 19 23:52:42.771: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:52:42.850: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubectl-3493" for this suite. Jul 19 23:52:48.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:52:49.030: INFO: namespace kubectl-3493 deletion completed in 6.103092714s • [SLOW TEST:6.340 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:52:49.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-1875 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1875 to expose endpoints map[] Jul 19 23:52:49.209: INFO: Get endpoints failed (9.356677ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 19 23:52:50.213: INFO: successfully validated that service endpoint-test2 in namespace services-1875 
exposes endpoints map[] (1.013335302s elapsed) STEP: Creating pod pod1 in namespace services-1875 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1875 to expose endpoints map[pod1:[80]] Jul 19 23:52:54.332: INFO: successfully validated that service endpoint-test2 in namespace services-1875 exposes endpoints map[pod1:[80]] (4.111057696s elapsed) STEP: Creating pod pod2 in namespace services-1875 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1875 to expose endpoints map[pod1:[80] pod2:[80]] Jul 19 23:52:58.433: INFO: successfully validated that service endpoint-test2 in namespace services-1875 exposes endpoints map[pod1:[80] pod2:[80]] (4.096908909s elapsed) STEP: Deleting pod pod1 in namespace services-1875 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1875 to expose endpoints map[pod2:[80]] Jul 19 23:52:59.473: INFO: successfully validated that service endpoint-test2 in namespace services-1875 exposes endpoints map[pod2:[80]] (1.037030726s elapsed) STEP: Deleting pod pod2 in namespace services-1875 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1875 to expose endpoints map[] Jul 19 23:53:00.519: INFO: successfully validated that service endpoint-test2 in namespace services-1875 exposes endpoints map[] (1.041862884s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:53:00.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1875" for this suite. 
Jul 19 23:53:06.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:53:06.739: INFO: namespace services-1875 deletion completed in 6.131432468s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:17.708 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:53:06.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 19 23:53:06.949: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:53:11.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9204" for this suite. 
Jul 19 23:53:49.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:53:49.099: INFO: namespace pods-9204 deletion completed in 38.087441151s • [SLOW TEST:42.360 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:53:49.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jul 19 23:53:49.147: INFO: Waiting up to 5m0s for pod "client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f" in namespace "containers-6754" to be "success or failure" Jul 19 23:53:49.203: INFO: Pod "client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.784428ms Jul 19 23:53:51.207: INFO: Pod "client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060259978s Jul 19 23:53:53.221: INFO: Pod "client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074164299s STEP: Saw pod success Jul 19 23:53:53.221: INFO: Pod "client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f" satisfied condition "success or failure" Jul 19 23:53:53.224: INFO: Trying to get logs from node iruya-worker pod client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f container test-container: STEP: delete the pod Jul 19 23:53:53.255: INFO: Waiting for pod client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f to disappear Jul 19 23:53:53.407: INFO: Pod client-containers-6442bedb-69e9-47b7-8ad4-653d9efdf47f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:53:53.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6754" for this suite. Jul 19 23:53:59.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:53:59.509: INFO: namespace containers-6754 deletion completed in 6.097538401s • [SLOW TEST:10.409 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jul 19 23:53:59.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-pg9b STEP: Creating a pod to test atomic-volume-subpath Jul 19 23:53:59.607: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pg9b" in namespace "subpath-6826" to be "success or failure" Jul 19 23:53:59.625: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.113836ms Jul 19 23:54:01.630: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022452224s Jul 19 23:54:03.634: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.026831876s Jul 19 23:54:05.638: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.031017806s Jul 19 23:54:07.642: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.034743429s Jul 19 23:54:09.645: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.038396347s Jul 19 23:54:11.650: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.042979531s Jul 19 23:54:13.654: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.046835184s Jul 19 23:54:15.658: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.050982369s Jul 19 23:54:17.662: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.054687556s Jul 19 23:54:19.666: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.059059476s Jul 19 23:54:21.670: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Running", Reason="", readiness=true. Elapsed: 22.063304975s Jul 19 23:54:23.675: INFO: Pod "pod-subpath-test-secret-pg9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.067561019s STEP: Saw pod success Jul 19 23:54:23.675: INFO: Pod "pod-subpath-test-secret-pg9b" satisfied condition "success or failure" Jul 19 23:54:23.678: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-pg9b container test-container-subpath-secret-pg9b: STEP: delete the pod Jul 19 23:54:23.803: INFO: Waiting for pod pod-subpath-test-secret-pg9b to disappear Jul 19 23:54:23.970: INFO: Pod pod-subpath-test-secret-pg9b no longer exists STEP: Deleting pod pod-subpath-test-secret-pg9b Jul 19 23:54:23.970: INFO: Deleting pod "pod-subpath-test-secret-pg9b" in namespace "subpath-6826" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:54:23.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6826" for this suite. 
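The "atomic-volume-subpath" wording above refers to kubelet's atomic writer, which publishes secret/configmap volume contents through a `..data` symlink so a reader never observes a half-written update. A rough local sketch of the swap (the `..v1`/`..v2` directory names are illustrative stand-ins for kubelet's timestamped directories):

```shell
vol=$(mktemp -d)

# Version 1: write a complete snapshot directory, then expose it via ..data.
mkdir "$vol/..v1"
echo "secret-v1" > "$vol/..v1/key"
ln -s ..v1 "$vol/..data"
ln -s ..data/key "$vol/key"

# Version 2: fully write the new snapshot first, then replace the ..data
# symlink with a single rename, so $vol/key always resolves to a complete
# snapshot (requires GNU mv for -T).
mkdir "$vol/..v2"
echo "secret-v2" > "$vol/..v2/key"
ln -s ..v2 "$vol/..data_tmp"
mv -T "$vol/..data_tmp" "$vol/..data"

cat "$vol/key"   # prints: secret-v2
```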
Jul 19 23:54:29.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:54:30.093: INFO: namespace subpath-6826 deletion completed in 6.117343986s • [SLOW TEST:30.584 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:54:30.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1046.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1046.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1046.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1046.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1046.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1046.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 19 23:54:36.202: INFO: DNS probes using dns-1046/dns-test-5c9eea0b-2bf7-4c8d-a768-117564c28e3a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:54:36.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1046" for this suite. 
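The `podARec` pipeline in the probe scripts above derives the pod's DNS A record name by dashifying its IP address. Run locally with a made-up pod IP (the probe obtains the real one from `hostname -i`):

```shell
# 10.244.1.5 is a hypothetical pod IP; dns-1046 is the namespace from this run.
podIP="10.244.1.5"
podARec=$(echo "$podIP" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1046.pod.cluster.local"}')
echo "$podARec"   # prints: 10-244-1-5.dns-1046.pod.cluster.local
```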
Jul 19 23:54:42.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:54:42.391: INFO: namespace dns-1046 deletion completed in 6.142313936s • [SLOW TEST:12.297 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:54:42.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:54:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8833" for this suite. 
Jul 19 23:55:04.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:55:04.652: INFO: namespace pods-8833 deletion completed in 22.151889792s • [SLOW TEST:22.260 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:55:04.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c0a10abd-5855-4660-916f-d20b6a19e2a0 STEP: Creating a pod to test consume configMaps Jul 19 23:55:04.763: INFO: Waiting up to 5m0s for pod "pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984" in namespace "configmap-9433" to be "success or failure" Jul 19 23:55:04.778: INFO: Pod "pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.723392ms Jul 19 23:55:06.863: INFO: Pod "pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100262314s Jul 19 23:55:08.868: INFO: Pod "pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10469338s STEP: Saw pod success Jul 19 23:55:08.868: INFO: Pod "pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984" satisfied condition "success or failure" Jul 19 23:55:08.871: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984 container configmap-volume-test: STEP: delete the pod Jul 19 23:55:08.911: INFO: Waiting for pod pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984 to disappear Jul 19 23:55:08.921: INFO: Pod pod-configmaps-80350c6b-8580-4414-84ef-3e10e6ec1984 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:55:08.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9433" for this suite. 
Jul 19 23:55:14.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:55:15.032: INFO: namespace configmap-9433 deletion completed in 6.10781145s • [SLOW TEST:10.381 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:55:15.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jul 19 23:55:15.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6624' Jul 19 23:55:17.999: INFO: stderr: "" Jul 19 23:55:17.999: INFO: stdout: "pod/pause created\n" Jul 19 23:55:17.999: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 19 23:55:17.999: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6624" to be "running and ready" Jul 19 23:55:18.037: INFO: Pod "pause": Phase="Pending", Reason="", 
readiness=false. Elapsed: 37.280521ms Jul 19 23:55:20.041: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041456107s Jul 19 23:55:22.045: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.045197305s Jul 19 23:55:22.045: INFO: Pod "pause" satisfied condition "running and ready" Jul 19 23:55:22.045: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jul 19 23:55:22.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6624' Jul 19 23:55:22.138: INFO: stderr: "" Jul 19 23:55:22.138: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 19 23:55:22.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6624' Jul 19 23:55:22.221: INFO: stderr: "" Jul 19 23:55:22.221: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 19 23:55:22.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6624' Jul 19 23:55:22.311: INFO: stderr: "" Jul 19 23:55:22.311: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 19 23:55:22.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6624' Jul 19 23:55:22.409: INFO: stderr: "" Jul 19 23:55:22.409: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] 
Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jul 19 23:55:22.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6624' Jul 19 23:55:22.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 19 23:55:22.523: INFO: stdout: "pod \"pause\" force deleted\n" Jul 19 23:55:22.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6624' Jul 19 23:55:22.613: INFO: stderr: "No resources found.\n" Jul 19 23:55:22.613: INFO: stdout: "" Jul 19 23:55:22.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6624 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 19 23:55:22.809: INFO: stderr: "" Jul 19 23:55:22.810: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:55:22.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6624" for this suite. 
Jul 19 23:55:29.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:55:29.089: INFO: namespace kubectl-6624 deletion completed in 6.23161396s • [SLOW TEST:14.057 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:55:29.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 19 23:55:29.196: INFO: PodSpec: initContainers in spec.initContainers Jul 19 23:56:20.243: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ee5fa862-fc7c-4f04-a0a1-4e2b4279a3fe", GenerateName:"", Namespace:"init-container-6373", 
SelfLink:"/api/v1/namespaces/init-container-6373/pods/pod-init-ee5fa862-fc7c-4f04-a0a1-4e2b4279a3fe", UID:"2d8ffd88-0d54-41aa-b2a9-d83bc92c11e9", ResourceVersion:"38674", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730799729, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"196714702"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7zsrg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023f4200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7zsrg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7zsrg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7zsrg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002bd4288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027be060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002bd4310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bd4330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002bd4338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002bd433c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730799729, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730799729, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730799729, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730799729, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.1.26", StartTime:(*v1.Time)(0xc001db6180), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001db61c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002632150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://97850ea342a1e7dc5757ec88d124aa0d5c9203a5a7507bf74bc20a58fa50da4b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001db61e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001db61a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:56:20.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6373" for this suite. 
Jul 19 23:56:44.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:56:44.356: INFO: namespace init-container-6373 deletion completed in 24.0755083s • [SLOW TEST:75.267 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:56:44.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 19 23:56:44.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7178' Jul 19 23:56:44.617: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 19 23:56:44.617: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jul 19 23:56:44.648: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qgtbv] Jul 19 23:56:44.649: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qgtbv" in namespace "kubectl-7178" to be "running and ready" Jul 19 23:56:44.665: INFO: Pod "e2e-test-nginx-rc-qgtbv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.396447ms Jul 19 23:56:46.669: INFO: Pod "e2e-test-nginx-rc-qgtbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020337078s Jul 19 23:56:48.707: INFO: Pod "e2e-test-nginx-rc-qgtbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058874811s Jul 19 23:56:50.712: INFO: Pod "e2e-test-nginx-rc-qgtbv": Phase="Running", Reason="", readiness=true. Elapsed: 6.063049356s Jul 19 23:56:50.712: INFO: Pod "e2e-test-nginx-rc-qgtbv" satisfied condition "running and ready" Jul 19 23:56:50.712: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-qgtbv] Jul 19 23:56:50.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7178' Jul 19 23:56:50.829: INFO: stderr: "" Jul 19 23:56:50.829: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jul 19 23:56:50.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7178' Jul 19 23:56:50.927: INFO: stderr: "" Jul 19 23:56:50.927: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:56:50.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7178" for this suite. Jul 19 23:56:56.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:56:57.027: INFO: namespace kubectl-7178 deletion completed in 6.096768401s • [SLOW TEST:12.671 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Jul 19 23:56:57.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-hs6w STEP: Creating a pod to test atomic-volume-subpath Jul 19 23:56:57.171: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hs6w" in namespace "subpath-9735" to be "success or failure" Jul 19 23:56:57.186: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Pending", Reason="", readiness=false. Elapsed: 15.008881ms Jul 19 23:56:59.332: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160349091s Jul 19 23:57:01.353: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 4.18196359s Jul 19 23:57:03.356: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 6.185168067s Jul 19 23:57:05.361: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 8.189944614s Jul 19 23:57:07.368: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 10.196597099s Jul 19 23:57:09.373: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 12.201906986s Jul 19 23:57:11.377: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 14.205575868s Jul 19 23:57:13.381: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.20930456s Jul 19 23:57:15.384: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 18.213118259s Jul 19 23:57:17.389: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 20.217670656s Jul 19 23:57:19.393: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Running", Reason="", readiness=true. Elapsed: 22.221893907s Jul 19 23:57:21.397: INFO: Pod "pod-subpath-test-downwardapi-hs6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.225973117s STEP: Saw pod success Jul 19 23:57:21.397: INFO: Pod "pod-subpath-test-downwardapi-hs6w" satisfied condition "success or failure" Jul 19 23:57:21.400: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-hs6w container test-container-subpath-downwardapi-hs6w: STEP: delete the pod Jul 19 23:57:21.485: INFO: Waiting for pod pod-subpath-test-downwardapi-hs6w to disappear Jul 19 23:57:21.490: INFO: Pod pod-subpath-test-downwardapi-hs6w no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-hs6w Jul 19 23:57:21.490: INFO: Deleting pod "pod-subpath-test-downwardapi-hs6w" in namespace "subpath-9735" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:57:21.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9735" for this suite. 
Jul 19 23:57:27.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:57:27.799: INFO: namespace subpath-9735 deletion completed in 6.121602411s • [SLOW TEST:30.771 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:57:27.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-75df5b4d-8dd1-4c2f-8a54-49de436d204f in namespace container-probe-7902 Jul 19 23:57:35.153: INFO: Started pod busybox-75df5b4d-8dd1-4c2f-8a54-49de436d204f in namespace container-probe-7902 STEP: checking the pod's current state and verifying that restartCount is present Jul 19 23:57:35.157: INFO: Initial restart count of pod 
busybox-75df5b4d-8dd1-4c2f-8a54-49de436d204f is 0 Jul 19 23:58:27.792: INFO: Restart count of pod container-probe-7902/busybox-75df5b4d-8dd1-4c2f-8a54-49de436d204f is now 1 (52.635349208s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:58:27.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7902" for this suite. Jul 19 23:58:33.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:58:34.041: INFO: namespace container-probe-7902 deletion completed in 6.200665674s • [SLOW TEST:66.242 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:58:34.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-1602e72d-e706-4527-aab9-db882197c7c1 STEP: Creating the 
pod STEP: Updating configmap configmap-test-upd-1602e72d-e706-4527-aab9-db882197c7c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:58:40.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6332" for this suite. Jul 19 23:59:02.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:59:02.320: INFO: namespace configmap-6332 deletion completed in 22.088063546s • [SLOW TEST:28.279 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:59:02.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 19 23:59:02.403: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9749" for this suite. Jul 19 23:59:08.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 19 23:59:08.500: INFO: namespace services-9749 deletion completed in 6.093549303s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.179 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 19 23:59:08.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-353 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
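The scaling test below makes each pod fail readiness on demand by moving index.html out of the nginx webroot (and later moving it back), which is what halts further ordered scaling. A minimal sketch of the kind of StatefulSet being exercised; the image and probe settings here are assumptions for illustration, not the actual e2e fixture (the selector labels and service name are taken from the log lines that follow):

```yaml
# Sketch only: image and probe values are assumed, not the e2e test's own spec.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # headless service "test" created by the suite
  replicas: 1
  selector:
    matchLabels:
      foo: bar
      baz: blah                # matches the watcher selector baz=blah,foo=bar
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: nginx
        image: nginx           # assumed; the suite ships its own test image
        readinessProbe:        # fails once index.html is moved out of the webroot
          httpGet:
            path: /index.html
            port: 80
          periodSeconds: 1
```

With a spec like this, running `mv /usr/share/nginx/html/index.html /tmp/` inside a pod makes the probe fail, the pod reports Ready=false, and the controller stops creating or deleting further replicas until the pod is healthy again.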
STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-353 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-353 Jul 19 23:59:08.602: INFO: Found 0 stateful pods, waiting for 1 Jul 19 23:59:18.607: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 19 23:59:18.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 19 23:59:18.858: INFO: stderr: "I0719 23:59:18.751386 979 log.go:172] (0xc0009d2420) (0xc00055a6e0) Create stream\nI0719 23:59:18.751454 979 log.go:172] (0xc0009d2420) (0xc00055a6e0) Stream added, broadcasting: 1\nI0719 23:59:18.754337 979 log.go:172] (0xc0009d2420) Reply frame received for 1\nI0719 23:59:18.754469 979 log.go:172] (0xc0009d2420) (0xc000966000) Create stream\nI0719 23:59:18.754541 979 log.go:172] (0xc0009d2420) (0xc000966000) Stream added, broadcasting: 3\nI0719 23:59:18.755993 979 log.go:172] (0xc0009d2420) Reply frame received for 3\nI0719 23:59:18.756031 979 log.go:172] (0xc0009d2420) (0xc00055a000) Create stream\nI0719 23:59:18.756039 979 log.go:172] (0xc0009d2420) (0xc00055a000) Stream added, broadcasting: 5\nI0719 23:59:18.757015 979 log.go:172] (0xc0009d2420) Reply frame received for 5\nI0719 23:59:18.820640 979 log.go:172] (0xc0009d2420) Data frame received for 5\nI0719 23:59:18.820665 979 log.go:172] (0xc00055a000) (5) Data frame handling\nI0719 23:59:18.820681 979 log.go:172] (0xc00055a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0719 23:59:18.849097 979 log.go:172] (0xc0009d2420) Data frame received for 3\nI0719 23:59:18.849135 979 log.go:172] (0xc000966000) (3) Data frame handling\nI0719 23:59:18.849175 979 log.go:172] (0xc000966000) (3) Data frame 
sent\nI0719 23:59:18.849194 979 log.go:172] (0xc0009d2420) Data frame received for 3\nI0719 23:59:18.849206 979 log.go:172] (0xc000966000) (3) Data frame handling\nI0719 23:59:18.849627 979 log.go:172] (0xc0009d2420) Data frame received for 5\nI0719 23:59:18.849659 979 log.go:172] (0xc00055a000) (5) Data frame handling\nI0719 23:59:18.851857 979 log.go:172] (0xc0009d2420) Data frame received for 1\nI0719 23:59:18.851902 979 log.go:172] (0xc00055a6e0) (1) Data frame handling\nI0719 23:59:18.851930 979 log.go:172] (0xc00055a6e0) (1) Data frame sent\nI0719 23:59:18.851957 979 log.go:172] (0xc0009d2420) (0xc00055a6e0) Stream removed, broadcasting: 1\nI0719 23:59:18.852069 979 log.go:172] (0xc0009d2420) Go away received\nI0719 23:59:18.852550 979 log.go:172] (0xc0009d2420) (0xc00055a6e0) Stream removed, broadcasting: 1\nI0719 23:59:18.852587 979 log.go:172] (0xc0009d2420) (0xc000966000) Stream removed, broadcasting: 3\nI0719 23:59:18.852611 979 log.go:172] (0xc0009d2420) (0xc00055a000) Stream removed, broadcasting: 5\n" Jul 19 23:59:18.858: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 19 23:59:18.858: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 19 23:59:18.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 19 23:59:29.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 19 23:59:29.082: INFO: Waiting for statefulset status.replicas updated to 0 Jul 19 23:59:29.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999947s Jul 19 23:59:30.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996625342s Jul 19 23:59:31.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991696851s Jul 19 23:59:32.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988078903s Jul 19 23:59:33.111: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983063481s Jul 19 23:59:34.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978254219s Jul 19 23:59:35.125: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.973470208s Jul 19 23:59:36.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96478398s Jul 19 23:59:37.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.944565716s Jul 19 23:59:38.416: INFO: Verifying statefulset ss doesn't scale past 1 for another 707.180861ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-353 Jul 19 23:59:39.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 19 23:59:39.926: INFO: stderr: "I0719 23:59:39.841427 1001 log.go:172] (0xc000984630) (0xc00033ebe0) Create stream\nI0719 23:59:39.841487 1001 log.go:172] (0xc000984630) (0xc00033ebe0) Stream added, broadcasting: 1\nI0719 23:59:39.844538 1001 log.go:172] (0xc000984630) Reply frame received for 1\nI0719 23:59:39.844579 1001 log.go:172] (0xc000984630) (0xc00033e320) Create stream\nI0719 23:59:39.844592 1001 log.go:172] (0xc000984630) (0xc00033e320) Stream added, broadcasting: 3\nI0719 23:59:39.845493 1001 log.go:172] (0xc000984630) Reply frame received for 3\nI0719 23:59:39.845522 1001 log.go:172] (0xc000984630) (0xc00033e3c0) Create stream\nI0719 23:59:39.845531 1001 log.go:172] (0xc000984630) (0xc00033e3c0) Stream added, broadcasting: 5\nI0719 23:59:39.846312 1001 log.go:172] (0xc000984630) Reply frame received for 5\nI0719 23:59:39.919363 1001 log.go:172] (0xc000984630) Data frame received for 3\nI0719 23:59:39.919395 1001 log.go:172] (0xc00033e320) (3) Data frame handling\nI0719 23:59:39.919410 1001 log.go:172] (0xc00033e320) (3) Data frame sent\nI0719 23:59:39.919419 1001 log.go:172] 
(0xc000984630) Data frame received for 3\nI0719 23:59:39.919425 1001 log.go:172] (0xc00033e320) (3) Data frame handling\nI0719 23:59:39.919460 1001 log.go:172] (0xc000984630) Data frame received for 5\nI0719 23:59:39.919512 1001 log.go:172] (0xc00033e3c0) (5) Data frame handling\nI0719 23:59:39.919565 1001 log.go:172] (0xc00033e3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0719 23:59:39.919595 1001 log.go:172] (0xc000984630) Data frame received for 5\nI0719 23:59:39.919727 1001 log.go:172] (0xc00033e3c0) (5) Data frame handling\nI0719 23:59:39.921263 1001 log.go:172] (0xc000984630) Data frame received for 1\nI0719 23:59:39.921293 1001 log.go:172] (0xc00033ebe0) (1) Data frame handling\nI0719 23:59:39.921334 1001 log.go:172] (0xc00033ebe0) (1) Data frame sent\nI0719 23:59:39.921362 1001 log.go:172] (0xc000984630) (0xc00033ebe0) Stream removed, broadcasting: 1\nI0719 23:59:39.921383 1001 log.go:172] (0xc000984630) Go away received\nI0719 23:59:39.921887 1001 log.go:172] (0xc000984630) (0xc00033ebe0) Stream removed, broadcasting: 1\nI0719 23:59:39.921910 1001 log.go:172] (0xc000984630) (0xc00033e320) Stream removed, broadcasting: 3\nI0719 23:59:39.921921 1001 log.go:172] (0xc000984630) (0xc00033e3c0) Stream removed, broadcasting: 5\n" Jul 19 23:59:39.926: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 19 23:59:39.926: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 19 23:59:39.932: INFO: Found 1 stateful pods, waiting for 3 Jul 19 23:59:50.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 19 23:59:50.053: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 23:59:50.053: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 19 23:59:59.937: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true Jul 19 23:59:59.937: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 19 23:59:59.937: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 19 23:59:59.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 20 00:00:00.389: INFO: stderr: "I0720 00:00:00.067665 1021 log.go:172] (0xc0009a4420) (0xc0009726e0) Create stream\nI0720 00:00:00.067735 1021 log.go:172] (0xc0009a4420) (0xc0009726e0) Stream added, broadcasting: 1\nI0720 00:00:00.069975 1021 log.go:172] (0xc0009a4420) Reply frame received for 1\nI0720 00:00:00.070035 1021 log.go:172] (0xc0009a4420) (0xc00079c320) Create stream\nI0720 00:00:00.070071 1021 log.go:172] (0xc0009a4420) (0xc00079c320) Stream added, broadcasting: 3\nI0720 00:00:00.070937 1021 log.go:172] (0xc0009a4420) Reply frame received for 3\nI0720 00:00:00.070968 1021 log.go:172] (0xc0009a4420) (0xc000866000) Create stream\nI0720 00:00:00.070978 1021 log.go:172] (0xc0009a4420) (0xc000866000) Stream added, broadcasting: 5\nI0720 00:00:00.071799 1021 log.go:172] (0xc0009a4420) Reply frame received for 5\nI0720 00:00:00.148865 1021 log.go:172] (0xc0009a4420) Data frame received for 5\nI0720 00:00:00.148890 1021 log.go:172] (0xc000866000) (5) Data frame handling\nI0720 00:00:00.148902 1021 log.go:172] (0xc000866000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0720 00:00:00.381482 1021 log.go:172] (0xc0009a4420) Data frame received for 3\nI0720 00:00:00.381525 1021 log.go:172] (0xc00079c320) (3) Data frame handling\nI0720 00:00:00.381545 1021 log.go:172] (0xc00079c320) (3) Data frame sent\nI0720 00:00:00.381722 1021 log.go:172] (0xc0009a4420) Data frame 
received for 5\nI0720 00:00:00.381747 1021 log.go:172] (0xc000866000) (5) Data frame handling\nI0720 00:00:00.381770 1021 log.go:172] (0xc0009a4420) Data frame received for 3\nI0720 00:00:00.381793 1021 log.go:172] (0xc00079c320) (3) Data frame handling\nI0720 00:00:00.383273 1021 log.go:172] (0xc0009a4420) Data frame received for 1\nI0720 00:00:00.383291 1021 log.go:172] (0xc0009726e0) (1) Data frame handling\nI0720 00:00:00.383313 1021 log.go:172] (0xc0009726e0) (1) Data frame sent\nI0720 00:00:00.383325 1021 log.go:172] (0xc0009a4420) (0xc0009726e0) Stream removed, broadcasting: 1\nI0720 00:00:00.383377 1021 log.go:172] (0xc0009a4420) Go away received\nI0720 00:00:00.383703 1021 log.go:172] (0xc0009a4420) (0xc0009726e0) Stream removed, broadcasting: 1\nI0720 00:00:00.383719 1021 log.go:172] (0xc0009a4420) (0xc00079c320) Stream removed, broadcasting: 3\nI0720 00:00:00.383727 1021 log.go:172] (0xc0009a4420) (0xc000866000) Stream removed, broadcasting: 5\n" Jul 20 00:00:00.389: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 20 00:00:00.389: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 20 00:00:00.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 20 00:00:01.044: INFO: stderr: "I0720 00:00:00.671129 1043 log.go:172] (0xc0006e0a50) (0xc00059c820) Create stream\nI0720 00:00:00.671213 1043 log.go:172] (0xc0006e0a50) (0xc00059c820) Stream added, broadcasting: 1\nI0720 00:00:00.674884 1043 log.go:172] (0xc0006e0a50) Reply frame received for 1\nI0720 00:00:00.674913 1043 log.go:172] (0xc0006e0a50) (0xc00059c000) Create stream\nI0720 00:00:00.674921 1043 log.go:172] (0xc0006e0a50) (0xc00059c000) Stream added, broadcasting: 3\nI0720 00:00:00.675817 1043 log.go:172] (0xc0006e0a50) Reply frame received for 3\nI0720 
00:00:00.675861 1043 log.go:172] (0xc0006e0a50) (0xc00039c320) Create stream\nI0720 00:00:00.675872 1043 log.go:172] (0xc0006e0a50) (0xc00039c320) Stream added, broadcasting: 5\nI0720 00:00:00.676815 1043 log.go:172] (0xc0006e0a50) Reply frame received for 5\nI0720 00:00:00.737044 1043 log.go:172] (0xc0006e0a50) Data frame received for 5\nI0720 00:00:00.737067 1043 log.go:172] (0xc00039c320) (5) Data frame handling\nI0720 00:00:00.737080 1043 log.go:172] (0xc00039c320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0720 00:00:01.036696 1043 log.go:172] (0xc0006e0a50) Data frame received for 3\nI0720 00:00:01.036844 1043 log.go:172] (0xc00059c000) (3) Data frame handling\nI0720 00:00:01.036870 1043 log.go:172] (0xc00059c000) (3) Data frame sent\nI0720 00:00:01.037120 1043 log.go:172] (0xc0006e0a50) Data frame received for 5\nI0720 00:00:01.037146 1043 log.go:172] (0xc00039c320) (5) Data frame handling\nI0720 00:00:01.037566 1043 log.go:172] (0xc0006e0a50) Data frame received for 3\nI0720 00:00:01.037600 1043 log.go:172] (0xc00059c000) (3) Data frame handling\nI0720 00:00:01.039493 1043 log.go:172] (0xc0006e0a50) Data frame received for 1\nI0720 00:00:01.039524 1043 log.go:172] (0xc00059c820) (1) Data frame handling\nI0720 00:00:01.039543 1043 log.go:172] (0xc00059c820) (1) Data frame sent\nI0720 00:00:01.039569 1043 log.go:172] (0xc0006e0a50) (0xc00059c820) Stream removed, broadcasting: 1\nI0720 00:00:01.039599 1043 log.go:172] (0xc0006e0a50) Go away received\nI0720 00:00:01.040047 1043 log.go:172] (0xc0006e0a50) (0xc00059c820) Stream removed, broadcasting: 1\nI0720 00:00:01.040078 1043 log.go:172] (0xc0006e0a50) (0xc00059c000) Stream removed, broadcasting: 3\nI0720 00:00:01.040096 1043 log.go:172] (0xc0006e0a50) (0xc00039c320) Stream removed, broadcasting: 5\n" Jul 20 00:00:01.045: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 20 00:00:01.045: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true 
on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 20 00:00:01.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 20 00:00:01.490: INFO: stderr: "I0720 00:00:01.341343 1063 log.go:172] (0xc000966420) (0xc000348780) Create stream\nI0720 00:00:01.341425 1063 log.go:172] (0xc000966420) (0xc000348780) Stream added, broadcasting: 1\nI0720 00:00:01.345381 1063 log.go:172] (0xc000966420) Reply frame received for 1\nI0720 00:00:01.345436 1063 log.go:172] (0xc000966420) (0xc00073c000) Create stream\nI0720 00:00:01.345457 1063 log.go:172] (0xc000966420) (0xc00073c000) Stream added, broadcasting: 3\nI0720 00:00:01.346685 1063 log.go:172] (0xc000966420) Reply frame received for 3\nI0720 00:00:01.346713 1063 log.go:172] (0xc000966420) (0xc00073c0a0) Create stream\nI0720 00:00:01.346723 1063 log.go:172] (0xc000966420) (0xc00073c0a0) Stream added, broadcasting: 5\nI0720 00:00:01.347528 1063 log.go:172] (0xc000966420) Reply frame received for 5\nI0720 00:00:01.396932 1063 log.go:172] (0xc000966420) Data frame received for 5\nI0720 00:00:01.396967 1063 log.go:172] (0xc00073c0a0) (5) Data frame handling\nI0720 00:00:01.396983 1063 log.go:172] (0xc00073c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0720 00:00:01.482576 1063 log.go:172] (0xc000966420) Data frame received for 5\nI0720 00:00:01.482624 1063 log.go:172] (0xc00073c0a0) (5) Data frame handling\nI0720 00:00:01.482645 1063 log.go:172] (0xc000966420) Data frame received for 3\nI0720 00:00:01.482651 1063 log.go:172] (0xc00073c000) (3) Data frame handling\nI0720 00:00:01.482664 1063 log.go:172] (0xc00073c000) (3) Data frame sent\nI0720 00:00:01.482685 1063 log.go:172] (0xc000966420) Data frame received for 3\nI0720 00:00:01.482693 1063 log.go:172] (0xc00073c000) (3) Data frame handling\nI0720 00:00:01.484273 1063 log.go:172] (0xc000966420) Data 
frame received for 1\nI0720 00:00:01.484293 1063 log.go:172] (0xc000348780) (1) Data frame handling\nI0720 00:00:01.484305 1063 log.go:172] (0xc000348780) (1) Data frame sent\nI0720 00:00:01.484319 1063 log.go:172] (0xc000966420) (0xc000348780) Stream removed, broadcasting: 1\nI0720 00:00:01.484625 1063 log.go:172] (0xc000966420) (0xc000348780) Stream removed, broadcasting: 1\nI0720 00:00:01.484646 1063 log.go:172] (0xc000966420) (0xc00073c000) Stream removed, broadcasting: 3\nI0720 00:00:01.484656 1063 log.go:172] (0xc000966420) (0xc00073c0a0) Stream removed, broadcasting: 5\n" Jul 20 00:00:01.490: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 20 00:00:01.490: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 20 00:00:01.490: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 00:00:01.493: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 20 00:00:11.501: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 00:00:11.501: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 20 00:00:11.501: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 20 00:00:11.512: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999496s Jul 20 00:00:12.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994265878s Jul 20 00:00:13.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.94412992s Jul 20 00:00:14.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.939496973s Jul 20 00:00:15.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.935393498s Jul 20 00:00:16.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.931438035s Jul 20 00:00:17.584: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 3.927548473s Jul 20 00:00:18.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.922952089s Jul 20 00:00:19.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.917797635s Jul 20 00:00:20.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 891.187237ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-353 Jul 20 00:00:21.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 20 00:00:22.140: INFO: stderr: "I0720 00:00:22.062656 1085 log.go:172] (0xc0009c4420) (0xc00063e6e0) Create stream\nI0720 00:00:22.062708 1085 log.go:172] (0xc0009c4420) (0xc00063e6e0) Stream added, broadcasting: 1\nI0720 00:00:22.065727 1085 log.go:172] (0xc0009c4420) Reply frame received for 1\nI0720 00:00:22.065797 1085 log.go:172] (0xc0009c4420) (0xc00063e000) Create stream\nI0720 00:00:22.065822 1085 log.go:172] (0xc0009c4420) (0xc00063e000) Stream added, broadcasting: 3\nI0720 00:00:22.066870 1085 log.go:172] (0xc0009c4420) Reply frame received for 3\nI0720 00:00:22.066891 1085 log.go:172] (0xc0009c4420) (0xc00063e0a0) Create stream\nI0720 00:00:22.066897 1085 log.go:172] (0xc0009c4420) (0xc00063e0a0) Stream added, broadcasting: 5\nI0720 00:00:22.067633 1085 log.go:172] (0xc0009c4420) Reply frame received for 5\nI0720 00:00:22.134076 1085 log.go:172] (0xc0009c4420) Data frame received for 5\nI0720 00:00:22.134114 1085 log.go:172] (0xc00063e0a0) (5) Data frame handling\nI0720 00:00:22.134125 1085 log.go:172] (0xc00063e0a0) (5) Data frame sent\nI0720 00:00:22.134132 1085 log.go:172] (0xc0009c4420) Data frame received for 5\nI0720 00:00:22.134138 1085 log.go:172] (0xc00063e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0720 00:00:22.134156 1085 log.go:172] (0xc0009c4420) Data frame received 
for 3\nI0720 00:00:22.134162 1085 log.go:172] (0xc00063e000) (3) Data frame handling\nI0720 00:00:22.134170 1085 log.go:172] (0xc00063e000) (3) Data frame sent\nI0720 00:00:22.134179 1085 log.go:172] (0xc0009c4420) Data frame received for 3\nI0720 00:00:22.134184 1085 log.go:172] (0xc00063e000) (3) Data frame handling\nI0720 00:00:22.135387 1085 log.go:172] (0xc0009c4420) Data frame received for 1\nI0720 00:00:22.135408 1085 log.go:172] (0xc00063e6e0) (1) Data frame handling\nI0720 00:00:22.135424 1085 log.go:172] (0xc00063e6e0) (1) Data frame sent\nI0720 00:00:22.135451 1085 log.go:172] (0xc0009c4420) (0xc00063e6e0) Stream removed, broadcasting: 1\nI0720 00:00:22.135502 1085 log.go:172] (0xc0009c4420) Go away received\nI0720 00:00:22.135879 1085 log.go:172] (0xc0009c4420) (0xc00063e6e0) Stream removed, broadcasting: 1\nI0720 00:00:22.135896 1085 log.go:172] (0xc0009c4420) (0xc00063e000) Stream removed, broadcasting: 3\nI0720 00:00:22.135904 1085 log.go:172] (0xc0009c4420) (0xc00063e0a0) Stream removed, broadcasting: 5\n" Jul 20 00:00:22.140: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 20 00:00:22.140: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 20 00:00:22.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 20 00:00:22.325: INFO: stderr: "I0720 00:00:22.259742 1107 log.go:172] (0xc0008ce420) (0xc00059e820) Create stream\nI0720 00:00:22.259806 1107 log.go:172] (0xc0008ce420) (0xc00059e820) Stream added, broadcasting: 1\nI0720 00:00:22.262004 1107 log.go:172] (0xc0008ce420) Reply frame received for 1\nI0720 00:00:22.262045 1107 log.go:172] (0xc0008ce420) (0xc0008a8000) Create stream\nI0720 00:00:22.262056 1107 log.go:172] (0xc0008ce420) (0xc0008a8000) Stream added, broadcasting: 3\nI0720 00:00:22.262719 
1107 log.go:172] (0xc0008ce420) Reply frame received for 3\nI0720 00:00:22.262750 1107 log.go:172] (0xc0008ce420) (0xc00059e8c0) Create stream\nI0720 00:00:22.262760 1107 log.go:172] (0xc0008ce420) (0xc00059e8c0) Stream added, broadcasting: 5\nI0720 00:00:22.263558 1107 log.go:172] (0xc0008ce420) Reply frame received for 5\nI0720 00:00:22.318332 1107 log.go:172] (0xc0008ce420) Data frame received for 3\nI0720 00:00:22.318374 1107 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0720 00:00:22.318386 1107 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0720 00:00:22.318396 1107 log.go:172] (0xc0008ce420) Data frame received for 3\nI0720 00:00:22.318404 1107 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0720 00:00:22.318433 1107 log.go:172] (0xc0008ce420) Data frame received for 5\nI0720 00:00:22.318443 1107 log.go:172] (0xc00059e8c0) (5) Data frame handling\nI0720 00:00:22.318457 1107 log.go:172] (0xc00059e8c0) (5) Data frame sent\nI0720 00:00:22.318472 1107 log.go:172] (0xc0008ce420) Data frame received for 5\nI0720 00:00:22.318481 1107 log.go:172] (0xc00059e8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0720 00:00:22.319817 1107 log.go:172] (0xc0008ce420) Data frame received for 1\nI0720 00:00:22.319837 1107 log.go:172] (0xc00059e820) (1) Data frame handling\nI0720 00:00:22.319851 1107 log.go:172] (0xc00059e820) (1) Data frame sent\nI0720 00:00:22.319869 1107 log.go:172] (0xc0008ce420) (0xc00059e820) Stream removed, broadcasting: 1\nI0720 00:00:22.319892 1107 log.go:172] (0xc0008ce420) Go away received\nI0720 00:00:22.320401 1107 log.go:172] (0xc0008ce420) (0xc00059e820) Stream removed, broadcasting: 1\nI0720 00:00:22.320420 1107 log.go:172] (0xc0008ce420) (0xc0008a8000) Stream removed, broadcasting: 3\nI0720 00:00:22.320430 1107 log.go:172] (0xc0008ce420) (0xc00059e8c0) Stream removed, broadcasting: 5\n" Jul 20 00:00:22.325: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 20 00:00:22.325: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 20 00:00:22.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-353 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 20 00:00:22.696: INFO: stderr: "I0720 00:00:22.473004 1125 log.go:172] (0xc000a9e2c0) (0xc00091e640) Create stream\nI0720 00:00:22.473079 1125 log.go:172] (0xc000a9e2c0) (0xc00091e640) Stream added, broadcasting: 1\nI0720 00:00:22.476556 1125 log.go:172] (0xc000a9e2c0) Reply frame received for 1\nI0720 00:00:22.476596 1125 log.go:172] (0xc000a9e2c0) (0xc00091e6e0) Create stream\nI0720 00:00:22.476607 1125 log.go:172] (0xc000a9e2c0) (0xc00091e6e0) Stream added, broadcasting: 3\nI0720 00:00:22.477581 1125 log.go:172] (0xc000a9e2c0) Reply frame received for 3\nI0720 00:00:22.477616 1125 log.go:172] (0xc000a9e2c0) (0xc0008ce000) Create stream\nI0720 00:00:22.477625 1125 log.go:172] (0xc000a9e2c0) (0xc0008ce000) Stream added, broadcasting: 5\nI0720 00:00:22.478329 1125 log.go:172] (0xc000a9e2c0) Reply frame received for 5\nI0720 00:00:22.535778 1125 log.go:172] (0xc000a9e2c0) Data frame received for 5\nI0720 00:00:22.535809 1125 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0720 00:00:22.535829 1125 log.go:172] (0xc0008ce000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0720 00:00:22.687895 1125 log.go:172] (0xc000a9e2c0) Data frame received for 3\nI0720 00:00:22.687944 1125 log.go:172] (0xc00091e6e0) (3) Data frame handling\nI0720 00:00:22.687980 1125 log.go:172] (0xc00091e6e0) (3) Data frame sent\nI0720 00:00:22.687997 1125 log.go:172] (0xc000a9e2c0) Data frame received for 3\nI0720 00:00:22.688010 1125 log.go:172] (0xc00091e6e0) (3) Data frame handling\nI0720 00:00:22.688858 1125 log.go:172] (0xc000a9e2c0) Data frame received for 5\nI0720 00:00:22.688887 1125 log.go:172] (0xc0008ce000) (5) Data frame 
handling\nI0720 00:00:22.690489 1125 log.go:172] (0xc000a9e2c0) Data frame received for 1\nI0720 00:00:22.690604 1125 log.go:172] (0xc00091e640) (1) Data frame handling\nI0720 00:00:22.690638 1125 log.go:172] (0xc00091e640) (1) Data frame sent\nI0720 00:00:22.690777 1125 log.go:172] (0xc000a9e2c0) (0xc00091e640) Stream removed, broadcasting: 1\nI0720 00:00:22.690827 1125 log.go:172] (0xc000a9e2c0) Go away received\nI0720 00:00:22.691305 1125 log.go:172] (0xc000a9e2c0) (0xc00091e640) Stream removed, broadcasting: 1\nI0720 00:00:22.691345 1125 log.go:172] (0xc000a9e2c0) (0xc00091e6e0) Stream removed, broadcasting: 3\nI0720 00:00:22.691370 1125 log.go:172] (0xc000a9e2c0) (0xc0008ce000) Stream removed, broadcasting: 5\n" Jul 20 00:00:22.696: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 20 00:00:22.696: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 20 00:00:22.696: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jul 20 00:01:02.868: INFO: Deleting all statefulset in ns statefulset-353 Jul 20 00:01:02.872: INFO: Scaling statefulset ss to 0 Jul 20 00:01:02.883: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 00:01:02.886: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:01:02.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-353" for this suite. 
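The in-order scale-up and reverse-order scale-down verified above are the default StatefulSet semantics; a sketch of the spec field that controls this behavior, shown with its default value:

```yaml
# Sketch only: pod ordering is governed by podManagementPolicy.
spec:
  podManagementPolicy: OrderedReady  # default: create pods 0..N-1 one at a time,
                                     # each waiting for the previous pod to be
                                     # Running and Ready; delete in reverse, N-1..0
  # The alternative, "Parallel", launches and terminates pods without waiting.
```

This is why the log above shows ss-0, ss-1, ss-2 becoming Ready in sequence on scale-up and being removed in the opposite order on scale-down.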
Jul 20 00:01:08.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:01:09.019: INFO: namespace statefulset-353 deletion completed in 6.105578269s • [SLOW TEST:120.519 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:01:09.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jul 20 00:01:15.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-93e472bd-763c-4c41-a551-186acf3ad7dc -c busybox-main-container --namespace=emptydir-9341 -- cat /usr/share/volumeshare/shareddata.txt' Jul 20 00:01:15.392: INFO: stderr: "I0720 00:01:15.289179 1145 log.go:172] 
(0xc000a14370) (0xc00093a820) Create stream\nI0720 00:01:15.289249 1145 log.go:172] (0xc000a14370) (0xc00093a820) Stream added, broadcasting: 1\nI0720 00:01:15.291890 1145 log.go:172] (0xc000a14370) Reply frame received for 1\nI0720 00:01:15.291933 1145 log.go:172] (0xc000a14370) (0xc000670320) Create stream\nI0720 00:01:15.291947 1145 log.go:172] (0xc000a14370) (0xc000670320) Stream added, broadcasting: 3\nI0720 00:01:15.293151 1145 log.go:172] (0xc000a14370) Reply frame received for 3\nI0720 00:01:15.293199 1145 log.go:172] (0xc000a14370) (0xc0006703c0) Create stream\nI0720 00:01:15.293216 1145 log.go:172] (0xc000a14370) (0xc0006703c0) Stream added, broadcasting: 5\nI0720 00:01:15.294386 1145 log.go:172] (0xc000a14370) Reply frame received for 5\nI0720 00:01:15.381117 1145 log.go:172] (0xc000a14370) Data frame received for 3\nI0720 00:01:15.381163 1145 log.go:172] (0xc000670320) (3) Data frame handling\nI0720 00:01:15.381180 1145 log.go:172] (0xc000670320) (3) Data frame sent\nI0720 00:01:15.381194 1145 log.go:172] (0xc000a14370) Data frame received for 3\nI0720 00:01:15.381205 1145 log.go:172] (0xc000670320) (3) Data frame handling\nI0720 00:01:15.381243 1145 log.go:172] (0xc000a14370) Data frame received for 5\nI0720 00:01:15.381257 1145 log.go:172] (0xc0006703c0) (5) Data frame handling\nI0720 00:01:15.383035 1145 log.go:172] (0xc000a14370) Data frame received for 1\nI0720 00:01:15.383072 1145 log.go:172] (0xc00093a820) (1) Data frame handling\nI0720 00:01:15.383099 1145 log.go:172] (0xc00093a820) (1) Data frame sent\nI0720 00:01:15.383122 1145 log.go:172] (0xc000a14370) (0xc00093a820) Stream removed, broadcasting: 1\nI0720 00:01:15.383370 1145 log.go:172] (0xc000a14370) Go away received\nI0720 00:01:15.383701 1145 log.go:172] (0xc000a14370) (0xc00093a820) Stream removed, broadcasting: 1\nI0720 00:01:15.383740 1145 log.go:172] (0xc000a14370) (0xc000670320) Stream removed, broadcasting: 3\nI0720 00:01:15.383763 1145 log.go:172] (0xc000a14370) (0xc0006703c0) 
Stream removed, broadcasting: 5\n" Jul 20 00:01:15.392: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:01:15.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9341" for this suite. Jul 20 00:01:21.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:01:21.486: INFO: namespace emptydir-9341 deletion completed in 6.089574461s • [SLOW TEST:12.467 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:01:21.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78 Jul 20 00:01:21.663: INFO: Pod name my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78: Found 0 pods out of 1 Jul 20 00:01:26.667: 
INFO: Pod name my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78: Found 1 pods out of 1 Jul 20 00:01:26.667: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78" are running Jul 20 00:01:26.670: INFO: Pod "my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78-mtp7j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 00:01:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 00:01:26 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 00:01:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 00:01:21 +0000 UTC Reason: Message:}]) Jul 20 00:01:26.670: INFO: Trying to dial the pod Jul 20 00:01:31.681: INFO: Controller my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78: Got expected result from replica 1 [my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78-mtp7j]: "my-hostname-basic-2b1721fa-55a1-4503-9ae7-c8698fb60c78-mtp7j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:01:31.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6485" for this suite. 
Jul 20 00:01:37.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:01:37.802: INFO: namespace replication-controller-6485 deletion completed in 6.116892285s • [SLOW TEST:16.315 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:01:37.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jul 20 00:01:38.009: INFO: Waiting up to 5m0s for pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2" in namespace "containers-1031" to be "success or failure" Jul 20 00:01:38.017: INFO: Pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.734173ms Jul 20 00:01:40.020: INFO: Pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010972352s Jul 20 00:01:42.024: INFO: Pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014806658s Jul 20 00:01:44.030: INFO: Pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021291916s STEP: Saw pod success Jul 20 00:01:44.030: INFO: Pod "client-containers-9df52fd5-2bda-462c-bec5-183378e939a2" satisfied condition "success or failure" Jul 20 00:01:44.033: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9df52fd5-2bda-462c-bec5-183378e939a2 container test-container: STEP: delete the pod Jul 20 00:01:44.080: INFO: Waiting for pod client-containers-9df52fd5-2bda-462c-bec5-183378e939a2 to disappear Jul 20 00:01:44.088: INFO: Pod client-containers-9df52fd5-2bda-462c-bec5-183378e939a2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:01:44.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1031" for this suite. 
Jul 20 00:01:50.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:01:50.193: INFO: namespace containers-1031 deletion completed in 6.100678686s • [SLOW TEST:12.390 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:01:50.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e4d7d47a-a285-436b-b9f2-0cdfa925d955 STEP: Creating a pod to test consume secrets Jul 20 00:01:50.388: INFO: Waiting up to 5m0s for pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397" in namespace "secrets-8095" to be "success or failure" Jul 20 00:01:50.406: INFO: Pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.042914ms Jul 20 00:01:52.642: INFO: Pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253630934s Jul 20 00:01:54.646: INFO: Pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397": Phase="Running", Reason="", readiness=true. Elapsed: 4.257966113s Jul 20 00:01:56.650: INFO: Pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261664011s STEP: Saw pod success Jul 20 00:01:56.650: INFO: Pod "pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397" satisfied condition "success or failure" Jul 20 00:01:56.652: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397 container secret-volume-test: STEP: delete the pod Jul 20 00:01:56.691: INFO: Waiting for pod pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397 to disappear Jul 20 00:01:56.743: INFO: Pod pod-secrets-4b4f8d02-d3e2-451b-bf65-852286dec397 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:01:56.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8095" for this suite. Jul 20 00:02:04.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:02:04.924: INFO: namespace secrets-8095 deletion completed in 8.176854332s STEP: Destroying namespace "secret-namespace-6214" for this suite. 
Jul 20 00:02:11.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:02:11.235: INFO: namespace secret-namespace-6214 deletion completed in 6.310187901s • [SLOW TEST:21.042 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:02:11.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6xm6c in namespace proxy-2530 I0720 00:02:11.578937 6 runners.go:180] Created replication controller with name: proxy-service-6xm6c, namespace: proxy-2530, replica count: 1 I0720 00:02:12.629376 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 00:02:13.629555 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 00:02:14.629721 6 runners.go:180] 
proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 00:02:15.629951 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:16.630165 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:17.630452 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:18.630666 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:19.630895 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:20.631151 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:21.631382 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:22.631633 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0720 00:02:23.631877 6 runners.go:180] proxy-service-6xm6c Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 20 00:02:23.635: INFO: setup took 12.232219814s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 20 00:02:23.642: INFO: (0) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 6.53536ms) 
Jul 20 00:02:23.644: INFO: (0) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 8.445519ms) Jul 20 00:02:23.644: INFO: (0) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 8.868203ms) Jul 20 00:02:23.644: INFO: (0) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 8.865807ms) Jul 20 00:02:23.644: INFO: (0) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 8.899834ms) Jul 20 00:02:23.644: INFO: (0) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 9.034082ms) Jul 20 00:02:23.645: INFO: (0) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 8.982058ms) Jul 20 00:02:23.645: INFO: (0) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 9.12651ms) Jul 20 00:02:23.645: INFO: (0) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 9.523225ms) Jul 20 00:02:23.645: INFO: (0) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 9.654022ms) Jul 20 00:02:23.645: INFO: (0) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 9.746988ms) Jul 20 00:02:23.649: INFO: (0) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 13.980419ms) Jul 20 00:02:23.650: INFO: (0) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 13.943713ms) Jul 20 00:02:23.651: INFO: (0) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... 
(200; 32.491205ms) Jul 20 00:02:23.685: INFO: (1) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 32.405011ms) Jul 20 00:02:23.685: INFO: (1) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 32.487259ms) Jul 20 00:02:23.685: INFO: (1) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 32.57302ms) Jul 20 00:02:23.685: INFO: (1) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... (200; 33.144928ms) Jul 20 00:02:23.687: INFO: (1) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 34.656746ms) Jul 20 00:02:23.687: INFO: (1) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 34.649818ms) Jul 20 00:02:23.687: INFO: (1) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 34.901653ms) Jul 20 00:02:23.687: INFO: (1) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 34.967263ms) Jul 20 00:02:23.687: INFO: (1) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 34.826177ms) Jul 20 00:02:23.688: INFO: (1) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 35.331724ms) Jul 20 00:02:23.692: INFO: (2) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.696554ms) Jul 20 00:02:23.692: INFO: (2) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.687841ms) Jul 20 00:02:23.695: INFO: (2) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... 
(200; 6.967123ms) Jul 20 00:02:23.695: INFO: (2) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 7.009999ms) Jul 20 00:02:23.695: INFO: (2) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 8.133088ms) Jul 20 00:02:23.696: INFO: (2) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 8.160148ms) Jul 20 00:02:23.696: INFO: (2) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 8.261205ms) Jul 20 00:02:23.696: INFO: (2) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 8.346919ms) Jul 20 00:02:23.697: INFO: (2) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 8.970284ms) Jul 20 00:02:23.697: INFO: (2) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 9.523706ms) Jul 20 00:02:23.697: INFO: (2) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 9.528133ms) Jul 20 00:02:23.698: INFO: (2) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 9.824646ms) Jul 20 00:02:23.698: INFO: (2) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 9.874568ms) Jul 20 00:02:23.698: INFO: (2) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 9.965355ms) Jul 20 00:02:23.700: INFO: (3) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... 
(200; 2.571397ms) Jul 20 00:02:23.701: INFO: (3) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.123374ms) Jul 20 00:02:23.701: INFO: (3) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 4.818564ms) Jul 20 00:02:23.703: INFO: (3) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.898449ms) Jul 20 00:02:23.703: INFO: (3) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.941482ms) Jul 20 00:02:23.703: INFO: (3) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 4.969401ms) Jul 20 00:02:23.703: INFO: (3) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 5.08025ms) Jul 20 00:02:23.703: INFO: (3) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 5.385598ms) Jul 20 00:02:23.704: INFO: (3) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 6.265946ms) Jul 20 00:02:23.705: INFO: (3) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 6.442733ms) Jul 20 00:02:23.705: INFO: (3) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 6.523763ms) Jul 20 00:02:23.705: INFO: (3) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 6.688869ms) Jul 20 00:02:23.708: INFO: (4) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 3.21018ms) Jul 20 00:02:23.708: INFO: (4) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... 
(200; 3.644177ms) Jul 20 00:02:23.708: INFO: (4) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.636065ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.662869ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.754557ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.869059ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 3.982939ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.992296ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.070021ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.23844ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 4.277327ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.327431ms) Jul 20 00:02:23.709: INFO: (4) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.579875ms) Jul 20 00:02:23.712: INFO: (5) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 2.777033ms) Jul 20 00:02:23.712: INFO: (5) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... 
(200; 3.587269ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.656284ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.646142ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 3.642939ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.764633ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.752724ms) Jul 20 00:02:23.713: INFO: (5) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 3.851458ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.164076ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.447072ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.629762ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.62346ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.597385ms) Jul 20 00:02:23.714: INFO: (5) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.686108ms) Jul 20 00:02:23.718: INFO: (6) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 4.089283ms) Jul 20 00:02:23.718: INFO: (6) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... 
(200; 4.099281ms) Jul 20 00:02:23.718: INFO: (6) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 4.257867ms) Jul 20 00:02:23.719: INFO: (6) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.684335ms) Jul 20 00:02:23.719: INFO: (6) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 4.929812ms) Jul 20 00:02:23.719: INFO: (6) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.957348ms) Jul 20 00:02:23.719: INFO: (6) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 5.071301ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 5.375338ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 5.550922ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 5.621399ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 5.581055ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 5.622923ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 5.686284ms) Jul 20 00:02:23.720: INFO: (6) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... (200; 4.467129ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 4.280393ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... 
(200; 4.239845ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.583377ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.327622ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.355478ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.329763ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 4.495394ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.553807ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 4.813539ms) Jul 20 00:02:23.725: INFO: (7) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... (200; 3.867597ms) Jul 20 00:02:23.736: INFO: (8) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 10.767154ms) Jul 20 00:02:23.736: INFO: (8) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... 
(200; 12.21048ms) Jul 20 00:02:23.738: INFO: (8) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 12.161785ms) Jul 20 00:02:23.738: INFO: (8) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 12.222642ms) Jul 20 00:02:23.738: INFO: (8) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 12.276634ms) Jul 20 00:02:23.738: INFO: (8) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 12.205019ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 2.977932ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 3.105799ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.270422ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.345337ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... 
(200; 3.340329ms) Jul 20 00:02:23.741: INFO: (9) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.383077ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 3.856973ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 3.784372ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.026004ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.077193ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.053845ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.12609ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.224097ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.278642ms) Jul 20 00:02:23.742: INFO: (9) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.308247ms) Jul 20 00:02:23.744: INFO: (10) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 1.999382ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.385584ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... 
(200; 3.436942ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.418762ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.401085ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 3.492111ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.608929ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... (200; 4.076959ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.209893ms) Jul 20 00:02:23.746: INFO: (10) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.266015ms) Jul 20 00:02:23.747: INFO: (10) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.403797ms) Jul 20 00:02:23.747: INFO: (10) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.518387ms) Jul 20 00:02:23.747: INFO: (10) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.488425ms) Jul 20 00:02:23.747: INFO: (10) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.686075ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 2.851746ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... 
(200; 2.844299ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.000887ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.013771ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 3.019908ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.062628ms) Jul 20 00:02:23.750: INFO: (11) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... (200; 3.502397ms) Jul 20 00:02:23.751: INFO: (11) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.39548ms) Jul 20 00:02:23.751: INFO: (11) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.366625ms) Jul 20 00:02:23.752: INFO: (11) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.609546ms) Jul 20 00:02:23.752: INFO: (11) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.517379ms) Jul 20 00:02:23.752: INFO: (11) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.543098ms) Jul 20 00:02:23.752: INFO: (11) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.701081ms) Jul 20 00:02:23.754: INFO: (12) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 2.265195ms) Jul 20 00:02:23.754: INFO: (12) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 2.277282ms) Jul 20 00:02:23.754: INFO: (12) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 2.68752ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: 
test<... (200; 4.306866ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 4.26355ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.470699ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.56545ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.636236ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.597992ms) Jul 20 00:02:23.756: INFO: (12) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.561152ms) Jul 20 00:02:23.759: INFO: (13) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 2.623629ms) Jul 20 00:02:23.759: INFO: (13) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 2.858426ms) Jul 20 00:02:23.759: INFO: (13) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 2.924782ms) Jul 20 00:02:23.760: INFO: (13) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.11781ms) Jul 20 00:02:23.760: INFO: (13) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.14131ms) Jul 20 00:02:23.760: INFO: (13) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.421085ms) Jul 20 00:02:23.760: INFO: (13) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 3.4261ms) Jul 20 00:02:23.760: INFO: (13) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... 
(200; 2.658134ms) Jul 20 00:02:23.765: INFO: (14) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.037602ms) Jul 20 00:02:23.765: INFO: (14) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.028941ms) Jul 20 00:02:23.765: INFO: (14) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.225553ms) Jul 20 00:02:23.766: INFO: (14) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.159293ms) Jul 20 00:02:23.766: INFO: (14) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 4.246232ms) Jul 20 00:02:23.766: INFO: (14) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.289734ms) Jul 20 00:02:23.766: INFO: (14) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 4.330477ms) Jul 20 00:02:23.766: INFO: (14) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 4.760569ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.880511ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 4.765336ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.812444ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.767545ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.859531ms) Jul 20 00:02:23.767: INFO: (14) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 5.047764ms) Jul 20 00:02:23.769: INFO: (15) 
/api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 2.088384ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.607934ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.633132ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 3.685214ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.683447ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.69565ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 4.492194ms) Jul 20 00:02:23.771: INFO: (15) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.571126ms) Jul 20 00:02:23.772: INFO: (15) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: ... (200; 3.834217ms) Jul 20 00:02:23.776: INFO: (16) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.878294ms) Jul 20 00:02:23.776: INFO: (16) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 4.062339ms) Jul 20 00:02:23.776: INFO: (16) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.109101ms) Jul 20 00:02:23.776: INFO: (16) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... 
(200; 3.885667ms) Jul 20 00:02:23.778: INFO: (16) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 5.697713ms) Jul 20 00:02:23.778: INFO: (16) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo (200; 5.940823ms) Jul 20 00:02:23.779: INFO: (16) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 6.466184ms) Jul 20 00:02:23.779: INFO: (16) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 6.57991ms) Jul 20 00:02:23.779: INFO: (16) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 6.582174ms) Jul 20 00:02:23.779: INFO: (16) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 7.091106ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.467069ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 3.776452ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 3.830656ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.774656ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.781853ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.028115ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 4.044476ms) Jul 20 00:02:23.783: INFO: (17) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... 
(200; 4.132397ms) Jul 20 00:02:23.784: INFO: (17) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... (200; 5.152549ms) Jul 20 00:02:23.785: INFO: (17) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 5.205778ms) Jul 20 00:02:23.785: INFO: (17) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 5.22938ms) Jul 20 00:02:23.785: INFO: (17) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 5.404761ms) Jul 20 00:02:23.787: INFO: (18) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:1080/proxy/: test<... (200; 1.880915ms) Jul 20 00:02:23.787: INFO: (18) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 2.283429ms) Jul 20 00:02:23.789: INFO: (18) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 4.580495ms) Jul 20 00:02:23.789: INFO: (18) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname2/proxy/: bar (200; 4.585621ms) Jul 20 00:02:23.789: INFO: (18) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:162/proxy/: bar (200; 4.586697ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 4.642043ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/services/proxy-service-6xm6c:portname1/proxy/: foo (200; 4.761677ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 4.660253ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 4.700191ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn/proxy/: test (200; 4.745232ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname1/proxy/: foo 
(200; 4.725348ms) Jul 20 00:02:23.790: INFO: (18) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test<... (200; 2.981255ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:462/proxy/: tls qux (200; 3.278398ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.333768ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:160/proxy/: foo (200; 3.532098ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:460/proxy/: tls baz (200; 3.610003ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/services/http:proxy-service-6xm6c:portname2/proxy/: bar (200; 3.678976ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/http:proxy-service-6xm6c-xq9mn:1080/proxy/: ... (200; 3.692061ms) Jul 20 00:02:23.793: INFO: (19) /api/v1/namespaces/proxy-2530/pods/https:proxy-service-6xm6c-xq9mn:443/proxy/: test (200; 4.226222ms) Jul 20 00:02:23.794: INFO: (19) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname1/proxy/: tls baz (200; 4.23632ms) Jul 20 00:02:23.794: INFO: (19) /api/v1/namespaces/proxy-2530/services/https:proxy-service-6xm6c:tlsportname2/proxy/: tls qux (200; 4.329206ms) STEP: deleting ReplicationController proxy-service-6xm6c in namespace proxy-2530, will wait for the garbage collector to delete the pods Jul 20 00:02:23.851: INFO: Deleting ReplicationController proxy-service-6xm6c took: 5.266469ms Jul 20 00:02:24.151: INFO: Terminating ReplicationController proxy-service-6xm6c pods took: 300.260957ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:02:35.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2530" for this suite. 
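Every probe above goes through the apiserver's proxy subresource; the URL shape is the same for pods and services, with an optional `http:`/`https:` scheme prefix and `:port` suffix folded into the resource name. A small illustrative helper (not part of the e2e framework) that reconstructs those paths:

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy path like the ones probed in this test.

    kind is "pods" or "services"; scheme ("http"/"https") and port (a
    number or a named service port) are optional and are encoded as
    "<scheme>:<name>:<port>" in the resource segment.
    """
    target = name
    if port is not None:
        target = f"{target}:{port}"
    if scheme is not None:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Reproduces probe URLs from the log above:
print(proxy_path("proxy-2530", "pods", "proxy-service-6xm6c-xq9mn", 160))
print(proxy_path("proxy-2530", "services", "proxy-service-6xm6c",
                 "tlsportname2", "https"))
```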
Jul 20 00:02:41.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:02:41.277: INFO: namespace proxy-2530 deletion completed in 6.120822374s

• [SLOW TEST:30.042 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:02:41.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:02:41.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87" in namespace "projected-1394" to be "success or failure"
Jul 20 00:02:41.462: INFO: Pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87": Phase="Pending", Reason="", readiness=false. Elapsed: 27.081968ms
Jul 20 00:02:43.486: INFO: Pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051225877s
Jul 20 00:02:45.489: INFO: Pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87": Phase="Running", Reason="", readiness=true. Elapsed: 4.053507126s
Jul 20 00:02:47.492: INFO: Pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057262384s
STEP: Saw pod success
Jul 20 00:02:47.492: INFO: Pod "downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87" satisfied condition "success or failure"
Jul 20 00:02:47.495: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87 container client-container: 
STEP: delete the pod
Jul 20 00:02:47.758: INFO: Waiting for pod downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87 to disappear
Jul 20 00:02:47.893: INFO: Pod downwardapi-volume-e613f70b-f406-451f-b92f-b05038d5da87 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:02:47.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1394" for this suite.
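The pod this test creates mounts a projected downward API volume and checks the default file mode on its files. A minimal manifest exercising the same `defaultMode` field looks roughly like this (names, mode, and image are illustrative, not the framework-generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0644        # applies to every file unless an item sets its own mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```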
Jul 20 00:02:55.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:02:55.986: INFO: namespace projected-1394 deletion completed in 8.089295095s

• [SLOW TEST:14.708 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:02:55.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 20 00:02:56.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8996'
Jul 20 00:02:56.867: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 00:02:56.867: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jul 20 00:02:57.216: INFO: scanned /root for discovery docs: 
Jul 20 00:02:57.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8996'
Jul 20 00:03:17.780: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 20 00:03:17.780: INFO: stdout: "Created e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54\nScaling up e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul 20 00:03:17.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8996'
Jul 20 00:03:17.874: INFO: stderr: ""
Jul 20 00:03:17.874: INFO: stdout: "e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54-zl9r4 "
Jul 20 00:03:17.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54-zl9r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8996'
Jul 20 00:03:17.960: INFO: stderr: ""
Jul 20 00:03:17.960: INFO: stdout: "true"
Jul 20 00:03:17.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54-zl9r4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8996'
Jul 20 00:03:18.058: INFO: stderr: ""
Jul 20 00:03:18.058: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul 20 00:03:18.058: INFO: e2e-test-nginx-rc-08858af06be7d642aabacc0ba18bdb54-zl9r4 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jul 20 00:03:18.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8996'
Jul 20 00:03:18.168: INFO: stderr: ""
Jul 20 00:03:18.168: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:03:18.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8996" for this suite.
Jul 20 00:03:24.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:03:24.284: INFO: namespace kubectl-8996 deletion completed in 6.112813561s

• [SLOW TEST:28.298 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:03:24.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul 20 00:03:24.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 00:03:24.365: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 00:03:24.368: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jul 20 00:03:24.374: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:03:24.374: INFO: Container kindnet-cni ready: true, restart count 0
Jul 20 00:03:24.374: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:03:24.374: INFO: Container kube-proxy ready: true, restart count 0
Jul 20 00:03:24.374: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jul 20 00:03:24.379: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Jul 20 00:03:24.379: INFO: Container kube-proxy ready: true, restart count 0
Jul 20 00:03:24.379: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Jul 20 00:03:24.379: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Jul 20 00:03:24.485: INFO: Pod kindnet-8kg9z requesting resource cpu=100m on Node iruya-worker2
Jul 20 00:03:24.485: INFO: Pod kindnet-k7tjm requesting resource cpu=100m on Node iruya-worker
Jul 20 00:03:24.485: INFO: Pod kube-proxy-9ktgx requesting resource cpu=0m on Node iruya-worker2
Jul 20 00:03:24.485: INFO: Pod kube-proxy-jzrnl requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
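The predicate being exercised here is a plain capacity check: sum the CPU requests already on the node, and a new pod fits only if its request does not push the total past allocatable. A self-contained sketch of that arithmetic (the millicore totals are illustrative, not read from this cluster; only the kindnet/kube-proxy request values come from the log):

```python
def fits_on_node(allocatable_mcpu, existing_requests_mcpu, new_request_mcpu):
    """Return True if a pod requesting new_request_mcpu millicores still fits."""
    used = sum(existing_requests_mcpu)
    return used + new_request_mcpu <= allocatable_mcpu

allocatable = 2000                    # hypothetical 2-CPU node
existing = [100, 0]                   # kindnet requests 100m, kube-proxy 0m (from the log)
filler = allocatable - sum(existing)  # filler pod sized to consume the remainder

assert fits_on_node(allocatable, existing, filler)            # filler pod schedules
assert not fits_on_node(allocatable, existing + [filler], 500)  # next pod: "Insufficient cpu"
```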
STEP: Considering event: Type = [Normal], Name = [filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e.16234d0b6e108e07], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3879/filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e.16234d0bbb1b7a43], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e.16234d0c0ca1dff4], Reason = [Created], Message = [Created container filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e.16234d0c25508fe9], Reason = [Started], Message = [Started container filler-pod-6bcdd8ee-d802-428a-87d6-fca93f2a198e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce.16234d0b702e274e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3879/filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce.16234d0bea70929a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce.16234d0c2d32c67a], Reason = [Created], Message = [Created container filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce]
STEP: Considering event: Type = [Normal], Name = [filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce.16234d0c3ad0595c], Reason = [Started], Message = [Started container filler-pod-72e6a871-8787-4d9a-95c0-c95923dc22ce]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16234d0cd6cb3da2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:03:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3879" for this suite.
Jul 20 00:03:37.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:03:37.739: INFO: namespace sched-pred-3879 deletion completed in 6.10631082s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.454 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:03:37.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6a1e55f7-2145-4c7d-85e5-1e89043d032e
STEP: Creating a pod to test consume secrets
Jul 20 00:03:37.889: INFO: Waiting up to 5m0s for pod "pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd" in namespace "secrets-3046" to be "success or failure"
Jul 20 00:03:37.984: INFO: Pod "pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd": Phase="Pending", Reason="", readiness=false. Elapsed: 95.247082ms
Jul 20 00:03:39.989: INFO: Pod "pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099780751s
Jul 20 00:03:41.993: INFO: Pod "pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104185663s
STEP: Saw pod success
Jul 20 00:03:41.993: INFO: Pod "pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd" satisfied condition "success or failure"
Jul 20 00:03:41.996: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd container secret-env-test: 
STEP: delete the pod
Jul 20 00:03:42.016: INFO: Waiting for pod pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd to disappear
Jul 20 00:03:42.020: INFO: Pod pod-secrets-f776786a-dc5d-4594-ab06-4807c04d16cd no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:03:42.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3046" for this suite.
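Unlike the volume-with-mappings variant at the top of this run, this test surfaces the secret as environment variables. The `secretKeyRef` shape it relies on looks roughly like this (pod, secret, and key names are illustrative, not the framework-generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test      # illustrative secret name
          key: data-1            # illustrative key within the secret
```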
Jul 20 00:03:48.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:03:48.138: INFO: namespace secrets-3046 deletion completed in 6.114213s

• [SLOW TEST:10.398 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:03:48.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7933
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jul 20 00:03:48.282: INFO: Found 0 stateful pods, waiting for 3
Jul 20 00:03:58.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:03:58.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:03:58.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 20 00:04:08.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:04:08.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:04:08.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 20 00:04:08.316: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 20 00:04:18.394: INFO: Updating stateful set ss2
Jul 20 00:04:18.440: INFO: Waiting for Pod statefulset-7933/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 20 00:04:28.620: INFO: Found 2 stateful pods, waiting for 3
Jul 20 00:04:38.626: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:04:38.626: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 00:04:38.626: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 20 00:04:38.648: INFO: Updating stateful set ss2
Jul 20 00:04:38.722: INFO: Waiting for Pod statefulset-7933/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 20 00:04:48.746: INFO: Updating stateful set ss2
Jul 20 00:04:49.075: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update
Jul 20 00:04:49.075: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 20 00:04:59.083: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 20 00:05:09.084: INFO: Deleting all statefulset in ns statefulset-7933
Jul 20 00:05:09.086: INFO: Scaling statefulset ss2 to 0
Jul 20 00:05:29.105: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 00:05:29.107: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:05:29.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7933" for this suite.
Jul 20 00:05:37.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:05:37.216: INFO: namespace statefulset-7933 deletion completed in 8.087718996s

• [SLOW TEST:109.077 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 
00:05:37.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:05:37.263: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 20 00:05:37.310: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 20 00:05:42.314: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 00:05:42.315: INFO: Creating deployment "test-rolling-update-deployment" Jul 20 00:05:42.319: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 20 00:05:42.367: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 20 00:05:44.385: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 20 00:05:44.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:05:46.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730800342, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:05:48.618: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 20 00:05:48.625: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9570,SelfLink:/apis/apps/v1/namespaces/deployment-9570/deployments/test-rolling-update-deployment,UID:df83d87d-1817-421e-8549-44a8d8bec32f,ResourceVersion:40757,Generation:1,CreationTimestamp:2020-07-20 00:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-20 00:05:42 +0000 UTC 2020-07-20 00:05:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-20 00:05:47 +0000 UTC 2020-07-20 00:05:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 20 00:05:48.628: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9570,SelfLink:/apis/apps/v1/namespaces/deployment-9570/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:f39fc594-5173-409b-ba40-9bbc0c41ede6,ResourceVersion:40742,Generation:1,CreationTimestamp:2020-07-20 00:05:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment df83d87d-1817-421e-8549-44a8d8bec32f 0xc0026ae757 0xc0026ae758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 20 00:05:48.628: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 20 00:05:48.628: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9570,SelfLink:/apis/apps/v1/namespaces/deployment-9570/replicasets/test-rolling-update-controller,UID:0cbd43b6-da2f-4b6d-bb48-9c25edd7d84e,ResourceVersion:40755,Generation:2,CreationTimestamp:2020-07-20 00:05:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment df83d87d-1817-421e-8549-44a8d8bec32f 0xc0026ae687 0xc0026ae688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 20 00:05:48.631: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-h88fd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-h88fd,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9570,SelfLink:/api/v1/namespaces/deployment-9570/pods/test-rolling-update-deployment-79f6b9d75c-h88fd,UID:d7cf04e9-dd63-4308-bf5d-7183476bcf9f,ResourceVersion:40741,Generation:0,CreationTimestamp:2020-07-20 00:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c f39fc594-5173-409b-ba40-9bbc0c41ede6 0xc0026af077 0xc0026af078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fv9sc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fv9sc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-fv9sc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026af170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026af190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:05:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:05:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:05:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:05:42 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.39,StartTime:2020-07-20 00:05:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-20 00:05:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://89edec7598b88dd8f79780560fcd84381d8b5fbaff93be224d2525f94b3181fb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:05:48.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-9570" for this suite. Jul 20 00:05:54.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:05:54.786: INFO: namespace deployment-9570 deletion completed in 6.151983419s • [SLOW TEST:17.570 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:05:54.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-dceffa07-7174-4f3f-92e3-b05f4d705091 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:05:54.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-179" for this suite. 
Jul 20 00:06:00.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:06:01.085: INFO: namespace configmap-179 deletion completed in 6.099060959s • [SLOW TEST:6.299 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:06:01.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 20 00:06:01.149: INFO: Waiting up to 5m0s for pod "downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0" in namespace "downward-api-9316" to be "success or failure" Jul 20 00:06:01.161: INFO: Pod "downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.38818ms Jul 20 00:06:03.280: INFO: Pod "downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.130625162s Jul 20 00:06:05.285: INFO: Pod "downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135165459s STEP: Saw pod success Jul 20 00:06:05.285: INFO: Pod "downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0" satisfied condition "success or failure" Jul 20 00:06:05.288: INFO: Trying to get logs from node iruya-worker pod downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0 container dapi-container: STEP: delete the pod Jul 20 00:06:05.315: INFO: Waiting for pod downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0 to disappear Jul 20 00:06:05.382: INFO: Pod downward-api-aee8bede-4ce3-470a-8bf8-10c171c0d5d0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:06:05.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9316" for this suite. Jul 20 00:06:11.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:06:11.482: INFO: namespace downward-api-9316 deletion completed in 6.095601529s • [SLOW TEST:10.395 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jul 20 00:06:11.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3721cfa8-b487-4b47-9ea4-77b37c501223 in namespace container-probe-7537 Jul 20 00:06:15.581: INFO: Started pod busybox-3721cfa8-b487-4b47-9ea4-77b37c501223 in namespace container-probe-7537 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 00:06:15.583: INFO: Initial restart count of pod busybox-3721cfa8-b487-4b47-9ea4-77b37c501223 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:10:16.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7537" for this suite. 
Jul 20 00:10:22.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:10:22.410: INFO: namespace container-probe-7537 deletion completed in 6.092627609s • [SLOW TEST:250.928 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:10:22.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should 
get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:10:52.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2028" for this suite. Jul 20 00:10:58.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:10:59.062: INFO: namespace container-runtime-2028 deletion completed in 6.105645864s • [SLOW TEST:36.652 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:10:59.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:10:59.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e" in namespace "downward-api-6613" to be "success or failure"
Jul 20 00:10:59.188: INFO: Pod "downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.398522ms
Jul 20 00:11:01.193: INFO: Pod "downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028907075s
Jul 20 00:11:03.197: INFO: Pod "downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033386286s
STEP: Saw pod success
Jul 20 00:11:03.197: INFO: Pod "downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e" satisfied condition "success or failure"
Jul 20 00:11:03.200: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e container client-container:
STEP: delete the pod
Jul 20 00:11:03.239: INFO: Waiting for pod downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e to disappear
Jul 20 00:11:03.245: INFO: Pod downwardapi-volume-2f05478b-3470-4f9b-8798-5c632c58bc6e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:11:03.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6613" for this suite.
Jul 20 00:11:09.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:11:09.355: INFO: namespace downward-api-6613 deletion completed in 6.107681626s
• [SLOW TEST:10.293 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:11:09.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:11:09.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3" in namespace "downward-api-9663" to be "success or failure"
Jul 20 00:11:09.488: INFO: Pod "downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 72.885165ms
Jul 20 00:11:11.512: INFO: Pod "downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096957545s
Jul 20 00:11:13.517: INFO: Pod "downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101494808s
STEP: Saw pod success
Jul 20 00:11:13.517: INFO: Pod "downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3" satisfied condition "success or failure"
Jul 20 00:11:13.520: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3 container client-container:
STEP: delete the pod
Jul 20 00:11:13.667: INFO: Waiting for pod downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3 to disappear
Jul 20 00:11:13.714: INFO: Pod downwardapi-volume-9b7c1654-78ee-4373-a37a-7bac9bf2a7d3 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:11:13.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9663" for this suite.
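For readers reproducing the two downward API memory tests above outside the e2e framework: a minimal sketch of the kind of pod these specs create. The metadata name, image, and byte sizes here are illustrative assumptions, not the framework's generated values; the mechanism shown (a `downwardAPI` volume item with a `resourceFieldRef`) is what exposes the container's memory limit and request as files.

```yaml
# Illustrative only: name, image, and sizes are hypothetical,
# not the exact manifest the e2e framework generates.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_limit"
        resourceFieldRef:          # projects the container's memory limit
          containerName: client-container
          resource: limits.memory
          divisor: "1Mi"
      - path: "mem_request"
        resourceFieldRef:          # projects the container's memory request
          containerName: client-container
          resource: requests.memory
          divisor: "1Mi"
```

The test then reads the container's logs and asserts the projected values match the resource spec, which is why a `Succeeded` phase counts as "success or failure" being satisfied.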
Jul 20 00:11:19.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:11:19.894: INFO: namespace downward-api-9663 deletion completed in 6.176318115s
• [SLOW TEST:10.539 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:11:19.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:11:20.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8" in namespace "downward-api-6706" to be "success or failure"
Jul 20 00:11:20.129: INFO: Pod "downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.328988ms
Jul 20 00:11:22.133: INFO: Pod "downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111359914s
Jul 20 00:11:24.138: INFO: Pod "downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115683188s
STEP: Saw pod success
Jul 20 00:11:24.138: INFO: Pod "downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8" satisfied condition "success or failure"
Jul 20 00:11:24.141: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8 container client-container:
STEP: delete the pod
Jul 20 00:11:24.189: INFO: Waiting for pod downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8 to disappear
Jul 20 00:11:24.199: INFO: Pod downwardapi-volume-1729745d-6058-4113-8389-185640a1ebe8 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:11:24.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6706" for this suite.
Jul 20 00:11:30.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:11:30.287: INFO: namespace downward-api-6706 deletion completed in 6.085473177s
• [SLOW TEST:10.392 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:11:30.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul 20 00:11:35.075: INFO: Successfully updated pod "annotationupdatef6651496-436a-49bd-921f-583b45498130"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:11:39.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6213" for this suite.
Jul 20 00:12:01.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:12:01.224: INFO: namespace downward-api-6213 deletion completed in 22.104128373s
• [SLOW TEST:30.936 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:12:01.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jul 20 00:12:01.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 20 00:12:01.467: INFO: stderr: ""
Jul 20 00:12:01.467: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:12:01.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2903" for this suite.
Jul 20 00:12:07.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:12:07.571: INFO: namespace kubectl-2903 deletion completed in 6.099744966s
• [SLOW TEST:6.347 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:12:07.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3312
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jul 20 00:12:07.678: INFO: Found 0 stateful
pods, waiting for 3 Jul 20 00:12:17.684: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 00:12:17.684: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 00:12:17.684: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 20 00:12:27.683: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 00:12:27.683: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 00:12:27.683: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 20 00:12:27.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3312 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 20 00:12:30.432: INFO: stderr: "I0720 00:12:30.314902 1306 log.go:172] (0xc00013edc0) (0xc0005f2960) Create stream\nI0720 00:12:30.314942 1306 log.go:172] (0xc00013edc0) (0xc0005f2960) Stream added, broadcasting: 1\nI0720 00:12:30.316900 1306 log.go:172] (0xc00013edc0) Reply frame received for 1\nI0720 00:12:30.316930 1306 log.go:172] (0xc00013edc0) (0xc000786000) Create stream\nI0720 00:12:30.316938 1306 log.go:172] (0xc00013edc0) (0xc000786000) Stream added, broadcasting: 3\nI0720 00:12:30.317797 1306 log.go:172] (0xc00013edc0) Reply frame received for 3\nI0720 00:12:30.317831 1306 log.go:172] (0xc00013edc0) (0xc00078c000) Create stream\nI0720 00:12:30.317841 1306 log.go:172] (0xc00013edc0) (0xc00078c000) Stream added, broadcasting: 5\nI0720 00:12:30.318652 1306 log.go:172] (0xc00013edc0) Reply frame received for 5\nI0720 00:12:30.386001 1306 log.go:172] (0xc00013edc0) Data frame received for 5\nI0720 00:12:30.386046 1306 log.go:172] (0xc00078c000) (5) Data frame handling\nI0720 00:12:30.386076 1306 log.go:172] (0xc00078c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0720 00:12:30.422738 1306 log.go:172] (0xc00013edc0) Data frame received for 3\nI0720 00:12:30.422871 1306 log.go:172] (0xc000786000) (3) Data frame handling\nI0720 00:12:30.422905 1306 log.go:172] (0xc000786000) (3) Data frame sent\nI0720 00:12:30.423038 1306 log.go:172] (0xc00013edc0) Data frame received for 5\nI0720 00:12:30.423064 1306 log.go:172] (0xc00078c000) (5) Data frame handling\nI0720 00:12:30.423097 1306 log.go:172] (0xc00013edc0) Data frame received for 3\nI0720 00:12:30.423112 1306 log.go:172] (0xc000786000) (3) Data frame handling\nI0720 00:12:30.425613 1306 log.go:172] (0xc00013edc0) Data frame received for 1\nI0720 00:12:30.425642 1306 log.go:172] (0xc0005f2960) (1) Data frame handling\nI0720 00:12:30.425668 1306 log.go:172] (0xc0005f2960) (1) Data frame sent\nI0720 00:12:30.425688 1306 log.go:172] (0xc00013edc0) (0xc0005f2960) Stream removed, broadcasting: 1\nI0720 00:12:30.425728 1306 log.go:172] (0xc00013edc0) Go away received\nI0720 00:12:30.426261 1306 log.go:172] (0xc00013edc0) (0xc0005f2960) Stream removed, broadcasting: 1\nI0720 00:12:30.426286 1306 log.go:172] (0xc00013edc0) (0xc000786000) Stream removed, broadcasting: 3\nI0720 00:12:30.426297 1306 log.go:172] (0xc00013edc0) (0xc00078c000) Stream removed, broadcasting: 5\n" Jul 20 00:12:30.432: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 20 00:12:30.432: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 20 00:12:40.466: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 20 00:12:50.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3312 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 20 
00:12:50.720: INFO: stderr: "I0720 00:12:50.618151 1340 log.go:172] (0xc000116dc0) (0xc0005266e0) Create stream\nI0720 00:12:50.618226 1340 log.go:172] (0xc000116dc0) (0xc0005266e0) Stream added, broadcasting: 1\nI0720 00:12:50.621755 1340 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0720 00:12:50.621826 1340 log.go:172] (0xc000116dc0) (0xc00055e1e0) Create stream\nI0720 00:12:50.621846 1340 log.go:172] (0xc000116dc0) (0xc00055e1e0) Stream added, broadcasting: 3\nI0720 00:12:50.622855 1340 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0720 00:12:50.622906 1340 log.go:172] (0xc000116dc0) (0xc000526000) Create stream\nI0720 00:12:50.622938 1340 log.go:172] (0xc000116dc0) (0xc000526000) Stream added, broadcasting: 5\nI0720 00:12:50.623944 1340 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0720 00:12:50.713248 1340 log.go:172] (0xc000116dc0) Data frame received for 3\nI0720 00:12:50.713287 1340 log.go:172] (0xc00055e1e0) (3) Data frame handling\nI0720 00:12:50.713298 1340 log.go:172] (0xc00055e1e0) (3) Data frame sent\nI0720 00:12:50.713306 1340 log.go:172] (0xc000116dc0) Data frame received for 3\nI0720 00:12:50.713313 1340 log.go:172] (0xc00055e1e0) (3) Data frame handling\nI0720 00:12:50.713325 1340 log.go:172] (0xc000116dc0) Data frame received for 5\nI0720 00:12:50.713332 1340 log.go:172] (0xc000526000) (5) Data frame handling\nI0720 00:12:50.713341 1340 log.go:172] (0xc000526000) (5) Data frame sent\nI0720 00:12:50.713348 1340 log.go:172] (0xc000116dc0) Data frame received for 5\nI0720 00:12:50.713354 1340 log.go:172] (0xc000526000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0720 00:12:50.714701 1340 log.go:172] (0xc000116dc0) Data frame received for 1\nI0720 00:12:50.714741 1340 log.go:172] (0xc0005266e0) (1) Data frame handling\nI0720 00:12:50.714762 1340 log.go:172] (0xc0005266e0) (1) Data frame sent\nI0720 00:12:50.714785 1340 log.go:172] (0xc000116dc0) (0xc0005266e0) Stream removed, broadcasting: 
1\nI0720 00:12:50.714826 1340 log.go:172] (0xc000116dc0) Go away received\nI0720 00:12:50.715276 1340 log.go:172] (0xc000116dc0) (0xc0005266e0) Stream removed, broadcasting: 1\nI0720 00:12:50.715302 1340 log.go:172] (0xc000116dc0) (0xc00055e1e0) Stream removed, broadcasting: 3\nI0720 00:12:50.715314 1340 log.go:172] (0xc000116dc0) (0xc000526000) Stream removed, broadcasting: 5\n" Jul 20 00:12:50.721: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 20 00:12:50.721: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 20 00:13:00.741: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update Jul 20 00:13:00.741: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 20 00:13:00.741: INFO: Waiting for Pod statefulset-3312/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 20 00:13:00.741: INFO: Waiting for Pod statefulset-3312/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 20 00:13:10.749: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update Jul 20 00:13:10.749: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 20 00:13:10.749: INFO: Waiting for Pod statefulset-3312/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 20 00:13:21.027: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update Jul 20 00:13:21.027: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jul 20 00:13:30.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3312 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 20 00:13:31.651: INFO: stderr: "I0720 00:13:31.349332 1363 log.go:172] 
(0xc00013ae70) (0xc0003b66e0) Create stream\nI0720 00:13:31.349400 1363 log.go:172] (0xc00013ae70) (0xc0003b66e0) Stream added, broadcasting: 1\nI0720 00:13:31.351862 1363 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0720 00:13:31.352274 1363 log.go:172] (0xc00013ae70) (0xc000910000) Create stream\nI0720 00:13:31.352297 1363 log.go:172] (0xc00013ae70) (0xc000910000) Stream added, broadcasting: 3\nI0720 00:13:31.354004 1363 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0720 00:13:31.354046 1363 log.go:172] (0xc00013ae70) (0xc0002f0000) Create stream\nI0720 00:13:31.354056 1363 log.go:172] (0xc00013ae70) (0xc0002f0000) Stream added, broadcasting: 5\nI0720 00:13:31.355163 1363 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0720 00:13:31.425068 1363 log.go:172] (0xc00013ae70) Data frame received for 5\nI0720 00:13:31.425106 1363 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0720 00:13:31.425135 1363 log.go:172] (0xc0002f0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0720 00:13:31.644012 1363 log.go:172] (0xc00013ae70) Data frame received for 3\nI0720 00:13:31.644041 1363 log.go:172] (0xc000910000) (3) Data frame handling\nI0720 00:13:31.644064 1363 log.go:172] (0xc000910000) (3) Data frame sent\nI0720 00:13:31.644269 1363 log.go:172] (0xc00013ae70) Data frame received for 5\nI0720 00:13:31.644293 1363 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0720 00:13:31.644325 1363 log.go:172] (0xc00013ae70) Data frame received for 3\nI0720 00:13:31.644343 1363 log.go:172] (0xc000910000) (3) Data frame handling\nI0720 00:13:31.645948 1363 log.go:172] (0xc00013ae70) Data frame received for 1\nI0720 00:13:31.645975 1363 log.go:172] (0xc0003b66e0) (1) Data frame handling\nI0720 00:13:31.645991 1363 log.go:172] (0xc0003b66e0) (1) Data frame sent\nI0720 00:13:31.646011 1363 log.go:172] (0xc00013ae70) (0xc0003b66e0) Stream removed, broadcasting: 1\nI0720 00:13:31.646037 1363 log.go:172] (0xc00013ae70) Go away 
received\nI0720 00:13:31.646402 1363 log.go:172] (0xc00013ae70) (0xc0003b66e0) Stream removed, broadcasting: 1\nI0720 00:13:31.646423 1363 log.go:172] (0xc00013ae70) (0xc000910000) Stream removed, broadcasting: 3\nI0720 00:13:31.646432 1363 log.go:172] (0xc00013ae70) (0xc0002f0000) Stream removed, broadcasting: 5\n" Jul 20 00:13:31.651: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 20 00:13:31.651: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 20 00:13:41.682: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 20 00:13:51.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3312 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 20 00:13:51.976: INFO: stderr: "I0720 00:13:51.886304 1383 log.go:172] (0xc00013adc0) (0xc00063a820) Create stream\nI0720 00:13:51.886355 1383 log.go:172] (0xc00013adc0) (0xc00063a820) Stream added, broadcasting: 1\nI0720 00:13:51.889101 1383 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0720 00:13:51.889139 1383 log.go:172] (0xc00013adc0) (0xc0006361e0) Create stream\nI0720 00:13:51.889148 1383 log.go:172] (0xc00013adc0) (0xc0006361e0) Stream added, broadcasting: 3\nI0720 00:13:51.890012 1383 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0720 00:13:51.890090 1383 log.go:172] (0xc00013adc0) (0xc00063a000) Create stream\nI0720 00:13:51.890121 1383 log.go:172] (0xc00013adc0) (0xc00063a000) Stream added, broadcasting: 5\nI0720 00:13:51.890952 1383 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0720 00:13:51.969328 1383 log.go:172] (0xc00013adc0) Data frame received for 3\nI0720 00:13:51.969382 1383 log.go:172] (0xc0006361e0) (3) Data frame handling\nI0720 00:13:51.969398 1383 log.go:172] (0xc0006361e0) (3) Data frame sent\nI0720 00:13:51.969416 1383 log.go:172] (0xc00013adc0) 
Data frame received for 3\nI0720 00:13:51.969432 1383 log.go:172] (0xc0006361e0) (3) Data frame handling\nI0720 00:13:51.969479 1383 log.go:172] (0xc00013adc0) Data frame received for 5\nI0720 00:13:51.969512 1383 log.go:172] (0xc00063a000) (5) Data frame handling\nI0720 00:13:51.969534 1383 log.go:172] (0xc00063a000) (5) Data frame sent\nI0720 00:13:51.969546 1383 log.go:172] (0xc00013adc0) Data frame received for 5\nI0720 00:13:51.969557 1383 log.go:172] (0xc00063a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0720 00:13:51.971123 1383 log.go:172] (0xc00013adc0) Data frame received for 1\nI0720 00:13:51.971154 1383 log.go:172] (0xc00063a820) (1) Data frame handling\nI0720 00:13:51.971167 1383 log.go:172] (0xc00063a820) (1) Data frame sent\nI0720 00:13:51.971186 1383 log.go:172] (0xc00013adc0) (0xc00063a820) Stream removed, broadcasting: 1\nI0720 00:13:51.971461 1383 log.go:172] (0xc00013adc0) Go away received\nI0720 00:13:51.971646 1383 log.go:172] (0xc00013adc0) (0xc00063a820) Stream removed, broadcasting: 1\nI0720 00:13:51.971687 1383 log.go:172] (0xc00013adc0) (0xc0006361e0) Stream removed, broadcasting: 3\nI0720 00:13:51.971712 1383 log.go:172] (0xc00013adc0) (0xc00063a000) Stream removed, broadcasting: 5\n" Jul 20 00:13:51.977: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 20 00:13:51.977: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 20 00:14:01.997: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update Jul 20 00:14:01.997: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 20 00:14:01.997: INFO: Waiting for Pod statefulset-3312/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 20 00:14:01.998: INFO: Waiting for Pod statefulset-3312/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 
20 00:14:12.004: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update
Jul 20 00:14:12.004: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 20 00:14:12.004: INFO: Waiting for Pod statefulset-3312/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 20 00:14:22.006: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update
Jul 20 00:14:22.006: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 20 00:14:22.006: INFO: Waiting for Pod statefulset-3312/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 20 00:14:32.006: INFO: Waiting for StatefulSet statefulset-3312/ss2 to complete update
Jul 20 00:14:32.006: INFO: Waiting for Pod statefulset-3312/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 20 00:14:42.034: INFO: Deleting all statefulset in ns statefulset-3312
Jul 20 00:14:42.036: INFO: Scaling statefulset ss2 to 0
Jul 20 00:15:22.069: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 00:15:22.072: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:15:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3312" for this suite.
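The rolling update and rollback exercised in the StatefulSet test above can be reproduced with an ordinary `RollingUpdate` strategy; a sketch under the assumption of the images named in the log (the set name `ss2` and image tags appear in the log, while the service name and labels here are illustrative guesses):

```yaml
# Sketch of a StatefulSet matching the behavior in the log above;
# serviceName and labels are assumptions, not the framework's values.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  updateStrategy:
    type: RollingUpdate   # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Changing the template image to `docker.io/library/nginx:1.15-alpine` produces a new controller revision (the `ss2-7c9b54fd4c` hash in the log); setting it back to `1.14-alpine` rolls the set back to the earlier revision (`ss2-6c5cd755cd`), which is exactly what the repeated "Waiting for Pod ... to have revision ... update revision ..." records track.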
Jul 20 00:15:30.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:15:30.201: INFO: namespace statefulset-3312 deletion completed in 8.101189156s
• [SLOW TEST:202.629 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:15:30.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:15:30.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37" in namespace "projected-4326" to be "success or failure"
Jul 20 00:15:30.292: INFO: Pod "downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.792592ms
Jul 20 00:15:32.295: INFO: Pod "downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006230848s
Jul 20 00:15:34.300: INFO: Pod "downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010506712s
STEP: Saw pod success
Jul 20 00:15:34.300: INFO: Pod "downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37" satisfied condition "success or failure"
Jul 20 00:15:34.303: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37 container client-container:
STEP: delete the pod
Jul 20 00:15:34.343: INFO: Waiting for pod downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37 to disappear
Jul 20 00:15:34.358: INFO: Pod downwardapi-volume-ddffcb75-eb83-45ae-bdd1-beb0b99b9a37 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:15:34.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4326" for this suite.
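The "should set mode on item file" behavior tested above corresponds to the per-item `mode` field of a projected downwardAPI volume. A minimal sketch, assuming an illustrative pod name, image, item path, and mode (the test's actual values are not shown in this log):

```yaml
# Illustrative sketch; name, image, path, and mode are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # per-item file mode, the property this test asserts
```

The container lists the projected file's permissions in its logs, and the test checks them against the requested mode, which is why the pod only needs to reach `Succeeded`.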
Jul 20 00:15:40.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:15:40.502: INFO: namespace projected-4326 deletion completed in 6.141454275s
• [SLOW TEST:10.301 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:15:40.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:15:40.549: INFO: Creating deployment "nginx-deployment"
Jul 20 00:15:40.575: INFO: Waiting for observed generation 1
Jul 20 00:15:42.593: INFO: Waiting for all required pods to come up
Jul 20 00:15:42.597: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 20 00:15:54.605: INFO: Waiting for deployment "nginx-deployment" to complete
Jul 20 00:15:54.611: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul 20 00:15:54.618: INFO: Updating deployment
nginx-deployment
Jul 20 00:15:54.618: INFO: Waiting for observed generation 2
Jul 20 00:15:56.645: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 20 00:15:56.648: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 20 00:15:56.650: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 20 00:15:56.656: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 20 00:15:56.656: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 20 00:15:56.658: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 20 00:15:56.662: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul 20 00:15:56.662: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul 20 00:15:56.668: INFO: Updating deployment nginx-deployment
Jul 20 00:15:56.668: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul 20 00:15:56.850: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 20 00:15:57.660: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul 20 00:15:59.932: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/deployments/nginx-deployment,UID:76250e49-7a81-48f9-9fe8-7467b66b2fae,ResourceVersion:42820,Generation:3,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-07-20 00:15:56 +0000 UTC 2020-07-20 00:15:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-20 00:15:58 +0000 UTC 2020-07-20 00:15:40 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jul 20 00:16:00.182: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/replicasets/nginx-deployment-55fb7cb77f,UID:fafe59c5-2413-495d-ba5b-7d8043caefa9,ResourceVersion:42817,Generation:3,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 76250e49-7a81-48f9-9fe8-7467b66b2fae 0xc002ebf9e7 0xc002ebf9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 20 00:16:00.182: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jul 20 00:16:00.182: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/replicasets/nginx-deployment-7b8c6f4498,UID:dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba,ResourceVersion:42797,Generation:3,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 76250e49-7a81-48f9-9fe8-7467b66b2fae 0xc002ebfab7 0xc002ebfab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jul 20 00:16:00.442: INFO: Pod "nginx-deployment-55fb7cb77f-26wll" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-26wll,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-26wll,UID:b86640d6-65a0-435d-a6f8-bd7c62869e3d,ResourceVersion:42851,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bd2e7 0xc0016bd2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016bd400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bd420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-2cbqc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2cbqc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-2cbqc,UID:298d00e9-2517-43e9-ac45-6c471690d5b4,ResourceVersion:42865,Generation:0,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bd4f0 0xc0016bd4f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016bd570} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bd590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.55,StartTime:2020-07-20 00:15:54 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-5tr2b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5tr2b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-5tr2b,UID:dc41b72b-1210-417f-9a9f-15771547f819,ResourceVersion:42870,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bd680 0xc0016bd681}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016bd700} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bd720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-74q72" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-74q72,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-74q72,UID:d5b4fffc-6de5-473d-ab94-0e0ebfeac893,ResourceVersion:42735,Generation:0,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bd7f0 0xc0016bd7f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016bd870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bd890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-7j2lp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7j2lp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-7j2lp,UID:8dc152de-357b-4abe-ad11-2fc0736d7c58,ResourceVersion:42860,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bd960 0xc0016bd961}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016bd9e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bda00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-8wqlt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8wqlt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-8wqlt,UID:352b0658-a778-4953-bee3-6f615f8299cb,ResourceVersion:42846,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bdad0 0xc0016bdad1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016bdb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bdb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.443: INFO: Pod "nginx-deployment-55fb7cb77f-9w6g9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9w6g9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-9w6g9,UID:c7c7ab3c-e56c-49da-9fcd-f85b3d910899,ResourceVersion:42807,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bdd70 0xc0016bdd71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0016bddf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016bde30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.444: INFO: Pod "nginx-deployment-55fb7cb77f-c8bbs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c8bbs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-c8bbs,UID:547298a5-d2eb-44e6-a65a-e2caa6998ddd,ResourceVersion:42713,Generation:0,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc0016bdfb0 0xc0016bdfb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:54 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.444: INFO: Pod "nginx-deployment-55fb7cb77f-jbmv2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jbmv2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-jbmv2,UID:2afb7bb7-ed40-4e19-aaf1-58f3948bc324,ResourceVersion:42812,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc002da4120 0xc002da4121}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da41a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da41c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.444: INFO: Pod "nginx-deployment-55fb7cb77f-k9hjc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k9hjc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-k9hjc,UID:63d36fb5-897b-4aa8-84fe-1c344254647f,ResourceVersion:42833,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc002da42d0 0xc002da42d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002da4380} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.444: INFO: Pod "nginx-deployment-55fb7cb77f-rt2pw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rt2pw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-rt2pw,UID:12f97b8f-5fb9-4d9e-88bf-9a409009ba25,ResourceVersion:42861,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc002da4550 0xc002da4551}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da45d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da45f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-55fb7cb77f-xvlm7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xvlm7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-xvlm7,UID:596af13d-c27e-408a-adaa-1d3e31f0bcbf,ResourceVersion:42718,Generation:0,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc002da46c0 0xc002da46c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4740} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-55fb7cb77f-zcc5b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zcc5b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-55fb7cb77f-zcc5b,UID:9fc5ae4b-f57e-4d38-8d44-8f75d0f0cf5a,ResourceVersion:42734,Generation:0,CreationTimestamp:2020-07-20 00:15:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f fafe59c5-2413-495d-ba5b-7d8043caefa9 0xc002da4840 0xc002da4841}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002da48d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da48f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-7b8c6f4498-2xh57" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2xh57,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-2xh57,UID:67063160-57d1-4c30-a1db-8ab946e1ff49,ResourceVersion:42847,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da49c0 0xc002da49c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-7b8c6f4498-4fjb2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4fjb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-4fjb2,UID:d510af4a-fb44-454d-8240-c23724663ff0,ResourceVersion:42650,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da4b20 0xc002da4b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.33,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://77fc61b43b72791fb3fce746aa1fa83399e2a5f0be3c3a25dc4b99ffcf993072}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-7b8c6f4498-6p86t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6p86t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-6p86t,UID:1a1998e0-b288-4f5d-9fa9-37613d37aa4e,ResourceVersion:42838,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da4c90 0xc002da4c91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.445: INFO: Pod "nginx-deployment-7b8c6f4498-8bwts" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8bwts,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-8bwts,UID:2e6e19a2-1e71-4453-a540-c726c379dc87,ResourceVersion:42849,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da4de0 0xc002da4de1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-8p9dk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8p9dk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-8p9dk,UID:822f9aeb-cad0-4964-bf6e-9578865f14c9,ResourceVersion:42638,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da4f30 0xc002da4f31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da4fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da4fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.51,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57a35acf0b2b5ae691aef7292a9d6070eb3a0271e953070f0a07edf456862fdb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-brr8x" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-brr8x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-brr8x,UID:be35d33f-61c3-4bf8-a349-3d4757f746c4,ResourceVersion:42625,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da50a0 0xc002da50a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5110} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.32,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://da0dbb9a41caab76ecb73e7850a3b1c573b5ff61bd98b8a28392acf9f880e14c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-cdkln" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cdkln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-cdkln,UID:a7c3c8aa-6031-48d6-a2a4-bd2b92f8f6a9,ResourceVersion:42667,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5200 0xc002da5201}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.53,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://67ab2afe1aa00fdb4f0621f1d40d955f851d90a9259bb96c97e17afc541a8719}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-f5kwh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f5kwh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-f5kwh,UID:2aca1af7-106a-4e38-9c4f-2539e4372c43,ResourceVersion:42642,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5360 0xc002da5361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da53e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.50,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://791cce8615784b84eb1c0ff879fe83af00fd9062b2310b9c8b759ba249a3fafe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-kcd5c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kcd5c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-kcd5c,UID:faeacc34-8c8a-455d-b609-1d8edd09b980,ResourceVersion:42825,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da54e0 0xc002da54e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.446: INFO: Pod "nginx-deployment-7b8c6f4498-kpw5c" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kpw5c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-kpw5c,UID:9f5d999b-532d-4784-9315-9d006298321a,ResourceVersion:42663,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5630 0xc002da5631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da56a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da56c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.34,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://91f48cf9e25ce84d6e2b7ea052fa975bb096dc6aebc927cfbc82abe5df2fd58c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-lmgqw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lmgqw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-lmgqw,UID:0d22c4a7-4b45-4639-bb0c-283b29d085f2,ResourceVersion:42677,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5790 0xc002da5791}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.54,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7e47291ce280209518a66942667112a164a4e29c5106da09d13635fc2501de8f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-mdxc9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mdxc9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-mdxc9,UID:07003821-ae8d-4369-8575-d44978bfe11d,ResourceVersion:42835,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da58f0 0xc002da58f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-mwrrv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mwrrv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-mwrrv,UID:8a989bdd-c22c-4ea3-8723-923f6d758bf2,ResourceVersion:42830,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5a40 0xc002da5a41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-qpfhv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qpfhv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-qpfhv,UID:cda65a38-7b6d-4d22-9388-6226d26a8a93,ResourceVersion:42798,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5b90 0xc002da5b91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-sp5nx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sp5nx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-sp5nx,UID:6d91d872-20e5-4d38-8e55-46db24457975,ResourceVersion:42840,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5ce0 0xc002da5ce1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.447: INFO: Pod "nginx-deployment-7b8c6f4498-vgdhq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vgdhq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-vgdhq,UID:a4a277bd-5b0a-4b59-9e4b-ba4df8c195ef,ResourceVersion:42864,Generation:0,CreationTimestamp:2020-07-20 00:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5e30 0xc002da5e31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002da5ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.448: INFO: Pod "nginx-deployment-7b8c6f4498-wd4pg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wd4pg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-wd4pg,UID:65b84847-a39e-4f1a-b650-b5d19bc8631d,ResourceVersion:42662,Generation:0,CreationTimestamp:2020-07-20 00:15:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002da5f80 0xc002da5f81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002da5ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a9a010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.52,StartTime:2020-07-20 00:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:15:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7366cd1e770457be9ac0bce169f48530a27e5f8f79372c807d81bc2d6fb4140a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.448: INFO: Pod "nginx-deployment-7b8c6f4498-wkkwz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wkkwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-wkkwz,UID:807b3ce6-fd6a-48ca-ac61-e3b1bddb1d23,ResourceVersion:42822,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002a9a0f0 0xc002a9a0f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a9a160} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a9a180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.448: INFO: Pod "nginx-deployment-7b8c6f4498-z56hn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z56hn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-z56hn,UID:efc158ba-f18f-4b1a-80e2-8854209c0393,ResourceVersion:42818,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002a9a240 0xc002a9a241}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a9a2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a9a2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:00.448: INFO: Pod "nginx-deployment-7b8c6f4498-zrc8g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zrc8g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/nginx-deployment-7b8c6f4498-zrc8g,UID:3daeb27f-a688-46c7-968d-0399d773255a,ResourceVersion:42805,Generation:0,CreationTimestamp:2020-07-20 00:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dfb9a66a-0695-4f4f-8a02-9d3b66ef82ba 0xc002a9a390 0xc002a9a391}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s9j45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s9j45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s9j45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a9a400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a9a420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:15:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:15:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:16:00.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2068" for this suite. 
Jul 20 00:16:26.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:16:26.747: INFO: namespace deployment-2068 deletion completed in 26.129207477s • [SLOW TEST:46.245 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:16:26.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 20 00:16:26.884: INFO: Waiting up to 5m0s for pod "pod-61836724-b013-4ea2-aba0-544a83252536" in namespace "emptydir-3537" to be "success or failure" Jul 20 00:16:26.887: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411099ms Jul 20 00:16:29.007: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123710884s Jul 20 00:16:31.011: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.127790456s Jul 20 00:16:33.016: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536": Phase="Running", Reason="", readiness=true. Elapsed: 6.131963547s Jul 20 00:16:35.248: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.363984734s STEP: Saw pod success Jul 20 00:16:35.248: INFO: Pod "pod-61836724-b013-4ea2-aba0-544a83252536" satisfied condition "success or failure" Jul 20 00:16:35.250: INFO: Trying to get logs from node iruya-worker2 pod pod-61836724-b013-4ea2-aba0-544a83252536 container test-container: STEP: delete the pod Jul 20 00:16:35.441: INFO: Waiting for pod pod-61836724-b013-4ea2-aba0-544a83252536 to disappear Jul 20 00:16:35.703: INFO: Pod pod-61836724-b013-4ea2-aba0-544a83252536 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:16:35.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3537" for this suite. 
Jul 20 00:16:41.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:16:41.949: INFO: namespace emptydir-3537 deletion completed in 6.241214007s • [SLOW TEST:15.202 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:16:41.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:16:42.025: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 20 00:16:47.029: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 00:16:47.029: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 20 00:16:47.049: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7801,SelfLink:/apis/apps/v1/namespaces/deployment-7801/deployments/test-cleanup-deployment,UID:3325cdd0-b406-43f4-8727-5a4a91a1a65f,ResourceVersion:43197,Generation:1,CreationTimestamp:2020-07-20 00:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jul 20 00:16:47.055: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7801,SelfLink:/apis/apps/v1/namespaces/deployment-7801/replicasets/test-cleanup-deployment-55bbcbc84c,UID:21545741-8b9a-4d36-9e9f-86342b9c2135,ResourceVersion:43199,Generation:1,CreationTimestamp:2020-07-20 00:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
3325cdd0-b406-43f4-8727-5a4a91a1a65f 0xc003091bd7 0xc003091bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 20 00:16:47.055: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 20 00:16:47.055: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7801,SelfLink:/apis/apps/v1/namespaces/deployment-7801/replicasets/test-cleanup-controller,UID:c3bbdb93-d194-4ff6-9025-9d06c1b47d9f,ResourceVersion:43198,Generation:1,CreationTimestamp:2020-07-20 00:16:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3325cdd0-b406-43f4-8727-5a4a91a1a65f 0xc003091a67 0xc003091a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 20 00:16:47.123: INFO: Pod "test-cleanup-controller-cb4xh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-cb4xh,GenerateName:test-cleanup-controller-,Namespace:deployment-7801,SelfLink:/api/v1/namespaces/deployment-7801/pods/test-cleanup-controller-cb4xh,UID:8ef9c12d-f9ce-4142-871f-666216ad4eb1,ResourceVersion:43192,Generation:0,CreationTimestamp:2020-07-20 00:16:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c3bbdb93-d194-4ff6-9025-9d06c1b47d9f 0xc002b72667 0xc002b72668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ltwzn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ltwzn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ltwzn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b726e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b72700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:16:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:16:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:16:44 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:16:42 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.67,StartTime:2020-07-20 00:16:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-20 00:16:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dff1dea560079346f62653c2807acc9973d24b22e6d1d60db6e3b435d433ab0f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 20 00:16:47.123: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jlx4m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jlx4m,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7801,SelfLink:/api/v1/namespaces/deployment-7801/pods/test-cleanup-deployment-55bbcbc84c-jlx4m,UID:29dc8467-c9bf-49bd-95f3-286bcc6774d8,ResourceVersion:43203,Generation:0,CreationTimestamp:2020-07-20 00:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 21545741-8b9a-4d36-9e9f-86342b9c2135 0xc002b727d7 0xc002b727d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ltwzn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ltwzn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ltwzn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b72850} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b72870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:16:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:16:47.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7801" for this suite. 
Jul 20 00:16:53.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:16:53.416: INFO: namespace deployment-7801 deletion completed in 6.270556479s • [SLOW TEST:11.467 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:16:53.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 00:16:57.675: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 
20 00:16:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2402" for this suite. Jul 20 00:17:03.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:17:03.791: INFO: namespace container-runtime-2402 deletion completed in 6.092443302s • [SLOW TEST:10.375 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:17:03.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:17:04.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5512" for this suite. Jul 20 00:17:10.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:17:10.170: INFO: namespace kubelet-test-5512 deletion completed in 6.095285291s • [SLOW TEST:6.378 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:17:10.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be 
deleted
Jul 20 00:17:17.674: INFO: 6 pods remaining
Jul 20 00:17:17.675: INFO: 0 pods has nil DeletionTimestamp
Jul 20 00:17:17.675: INFO:
Jul 20 00:17:19.183: INFO: 0 pods remaining
Jul 20 00:17:19.183: INFO: 0 pods has nil DeletionTimestamp
Jul 20 00:17:19.183: INFO:
STEP: Gathering metrics
W0720 00:17:20.140800 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 00:17:20.140: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:17:20.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2089" for this suite. 
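The deleteOptions this test exercises correspond to foreground cascading deletion: the ReplicationController is held (via a foregroundDeletion finalizer) until all of its pods are gone, which matches the 6-pods-remaining countdown logged above. A sketch of the request body, not the test's literal API call:

```yaml
# Body for DELETE .../namespaces/<ns>/replicationcontrollers/<name> (sketch)
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```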
Jul 20 00:17:26.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:17:26.819: INFO: namespace gc-2089 deletion completed in 6.319355497s • [SLOW TEST:16.648 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:17:26.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:17:26.869: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 20 00:17:28.924: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:17:29.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5007" for this suite. Jul 20 00:17:38.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:17:38.367: INFO: namespace replication-controller-5007 deletion completed in 8.426564505s • [SLOW TEST:11.548 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:17:38.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-dc31890b-8977-4562-a091-0f9270a0fa38 in namespace container-probe-610 Jul 20 00:17:44.613: INFO: Started pod liveness-dc31890b-8977-4562-a091-0f9270a0fa38 in namespace 
container-probe-610 STEP: checking the pod's current state and verifying that restartCount is present Jul 20 00:17:44.615: INFO: Initial restart count of pod liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is 0 Jul 20 00:18:04.803: INFO: Restart count of pod container-probe-610/liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is now 1 (20.187882964s elapsed) Jul 20 00:18:24.925: INFO: Restart count of pod container-probe-610/liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is now 2 (40.30988962s elapsed) Jul 20 00:18:44.966: INFO: Restart count of pod container-probe-610/liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is now 3 (1m0.350750898s elapsed) Jul 20 00:19:05.409: INFO: Restart count of pod container-probe-610/liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is now 4 (1m20.793594853s elapsed) Jul 20 00:20:09.846: INFO: Restart count of pod container-probe-610/liveness-dc31890b-8977-4562-a091-0f9270a0fa38 is now 5 (2m25.231241343s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:20:09.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-610" for this suite. 
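A liveness probe that always fails reproduces the monotonically increasing restart count checked above. This is a minimal hand-written sketch, not the test's actual pod; the name, image, and probe command are all assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example        # hypothetical; the test uses a generated name
spec:
  restartPolicy: Always
  containers:
  - name: liveness
    image: busybox              # assumed image
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/missing-file"]   # always fails, so the kubelet keeps restarting the container
      initialDelaySeconds: 5
      periodSeconds: 5
```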
Jul 20 00:20:15.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:20:16.047: INFO: namespace container-probe-610 deletion completed in 6.114428249s • [SLOW TEST:157.680 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:20:16.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 20 00:20:16.137: INFO: Waiting up to 5m0s for pod "pod-e6049101-635b-44b1-b434-176ef60ab76a" in namespace "emptydir-8743" to be "success or failure" Jul 20 00:20:16.141: INFO: Pod "pod-e6049101-635b-44b1-b434-176ef60ab76a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.583998ms Jul 20 00:20:18.150: INFO: Pod "pod-e6049101-635b-44b1-b434-176ef60ab76a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012655953s Jul 20 00:20:20.154: INFO: Pod "pod-e6049101-635b-44b1-b434-176ef60ab76a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.016345777s Jul 20 00:20:22.158: INFO: Pod "pod-e6049101-635b-44b1-b434-176ef60ab76a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02086434s STEP: Saw pod success Jul 20 00:20:22.158: INFO: Pod "pod-e6049101-635b-44b1-b434-176ef60ab76a" satisfied condition "success or failure" Jul 20 00:20:22.161: INFO: Trying to get logs from node iruya-worker pod pod-e6049101-635b-44b1-b434-176ef60ab76a container test-container: STEP: delete the pod Jul 20 00:20:22.187: INFO: Waiting for pod pod-e6049101-635b-44b1-b434-176ef60ab76a to disappear Jul 20 00:20:22.191: INFO: Pod pod-e6049101-635b-44b1-b434-176ef60ab76a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:20:22.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8743" for this suite. Jul 20 00:20:28.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:20:28.284: INFO: namespace emptydir-8743 deletion completed in 6.089226712s • [SLOW TEST:12.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Jul 20 00:20:28.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 20 00:20:36.467: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:36.498: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 00:20:38.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:38.502: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 00:20:40.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:40.502: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 00:20:42.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:42.502: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 00:20:44.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:44.502: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 00:20:46.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 00:20:46.502: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:20:46.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8421" for this suite. 
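The pod-with-prestop-http-hook deleted above pairs a preStop httpGet hook with the separately created handler pod mentioned in the BeforeEach. A minimal sketch; the image, port, and path are assumptions, since the log does not record them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1        # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo                  # hypothetical handler path
          port: 8080                   # hypothetical handler port
          host: <handler-pod-ip>       # IP of the hook-handler pod
```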
Jul 20 00:21:14.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:21:14.603: INFO: namespace container-lifecycle-hook-8421 deletion completed in 28.090354374s • [SLOW TEST:46.318 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:21:14.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0720 00:21:45.279439 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 20 00:21:45.279: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:21:45.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-156" for this suite. 
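This test deletes the Deployment with an Orphan propagation policy, so the garbage collector must leave the ReplicaSet behind rather than cascading the delete. Sketch of the request body:

```yaml
# Body for DELETE .../namespaces/<ns>/deployments/<name> (sketch)
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan    # delete the Deployment, leave its ReplicaSet
```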
Jul 20 00:21:51.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:21:51.403: INFO: namespace gc-156 deletion completed in 6.121356774s • [SLOW TEST:36.800 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:21:51.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-3469d0bc-dcd6-4b23-b884-b0429ba56a0c STEP: Creating a pod to test consume configMaps Jul 20 00:21:51.780: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453" in namespace "configmap-6693" to be "success or failure" Jul 20 00:21:51.850: INFO: Pod "pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453": Phase="Pending", Reason="", readiness=false. Elapsed: 69.640747ms Jul 20 00:21:53.976: INFO: Pod "pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.195590119s Jul 20 00:21:55.980: INFO: Pod "pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199801358s STEP: Saw pod success Jul 20 00:21:55.980: INFO: Pod "pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453" satisfied condition "success or failure" Jul 20 00:21:55.983: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453 container configmap-volume-test: STEP: delete the pod Jul 20 00:21:56.006: INFO: Waiting for pod pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453 to disappear Jul 20 00:21:56.024: INFO: Pod pod-configmaps-a5cefae8-f8a6-4820-b775-d1f123111453 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:21:56.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6693" for this suite. Jul 20 00:22:02.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:22:02.122: INFO: namespace configmap-6693 deletion completed in 6.094078998s • [SLOW TEST:10.718 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Jul 20 00:22:02.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:22:06.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8195" for this suite. Jul 20 00:22:48.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:22:48.334: INFO: namespace kubelet-test-8195 deletion completed in 42.113937815s • [SLOW TEST:46.211 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:22:48.334: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 20 00:22:48.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1888' Jul 20 00:22:51.094: INFO: stderr: "" Jul 20 00:22:51.094: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jul 20 00:22:51.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1888' Jul 20 00:22:55.660: INFO: stderr: "" Jul 20 00:22:55.660: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:22:55.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1888" for this suite. 
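With `--generator=run-pod/v1` and `--restart=Never`, the kubectl run invocation above creates a bare Pod (no Deployment or Job controller). The pod it produces is roughly equivalent to applying this manifest; the label and container naming follow the generator's documented behavior:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-1888
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```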
Jul 20 00:23:01.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:23:01.763: INFO: namespace kubectl-1888 deletion completed in 6.099316909s • [SLOW TEST:13.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:23:01.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7ea87784-acf4-4e6b-809e-31fc3ff9d461 STEP: Creating a pod to test consume secrets Jul 20 00:23:02.030: INFO: Waiting up to 5m0s for pod "pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d" in namespace "secrets-3777" to be "success or failure" Jul 20 00:23:02.032: INFO: Pod "pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.226225ms Jul 20 00:23:04.072: INFO: Pod "pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042745599s Jul 20 00:23:06.078: INFO: Pod "pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048148208s STEP: Saw pod success Jul 20 00:23:06.078: INFO: Pod "pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d" satisfied condition "success or failure" Jul 20 00:23:06.080: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d container secret-volume-test: STEP: delete the pod Jul 20 00:23:06.100: INFO: Waiting for pod pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d to disappear Jul 20 00:23:06.117: INFO: Pod pod-secrets-7f4fe86c-279a-44ce-8de3-4642ea7c129d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:23:06.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3777" for this suite. 
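The secret-volume consumption pattern verified above mounts the named Secret into the pod as files. A minimal sketch: the secret name and container name come from the log, while the pod name, test image, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example     # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-7ea87784-acf4-4e6b-809e-31fc3ff9d461
```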
Jul 20 00:23:12.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:23:12.209: INFO: namespace secrets-3777 deletion completed in 6.089531134s • [SLOW TEST:10.446 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:23:12.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 20 00:23:12.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc" in namespace "downward-api-1201" to be "success or failure" Jul 20 00:23:12.326: INFO: Pod "downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.597458ms Jul 20 00:23:14.468: INFO: Pod "downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145047153s Jul 20 00:23:16.472: INFO: Pod "downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149419646s STEP: Saw pod success Jul 20 00:23:16.472: INFO: Pod "downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc" satisfied condition "success or failure" Jul 20 00:23:16.475: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc container client-container: STEP: delete the pod Jul 20 00:23:16.517: INFO: Waiting for pod downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc to disappear Jul 20 00:23:16.569: INFO: Pod downwardapi-volume-d79d0405-855f-49f3-a97a-b4d305781ecc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:23:16.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1201" for this suite. 
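The "should set mode on item file" spec exercises a downward API volume item with an explicit per-item `mode`. A hedged sketch of the pod shape under test (names and image are assumptions; only the volume structure is the point):

```yaml
# Sketch: a downward API volume exposing the pod name as a file
# with an explicit per-item mode, which the spec then asserts on.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # generated name in the run
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed e2e test image
    args: ["--file_mode=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the per-item mode the spec verifies
```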
Jul 20 00:23:22.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:23:22.704: INFO: namespace downward-api-1201 deletion completed in 6.130821883s • [SLOW TEST:10.494 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:23:22.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 20 00:23:22.828: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:22.831: INFO: Number of nodes with available pods: 0 Jul 20 00:23:22.831: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:23.835: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:23.837: INFO: Number of nodes with available pods: 0 Jul 20 00:23:23.837: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:24.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:24.839: INFO: Number of nodes with available pods: 0 Jul 20 00:23:24.839: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:25.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:25.969: INFO: Number of nodes with available pods: 0 Jul 20 00:23:25.969: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:26.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:26.839: INFO: Number of nodes with available pods: 1 Jul 20 00:23:26.839: INFO: Node iruya-worker2 is running more than one daemon pod Jul 20 00:23:27.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:27.839: INFO: Number of nodes with available pods: 2 Jul 20 00:23:27.839: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 20 00:23:27.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:27.878: INFO: Number of nodes with available pods: 1 Jul 20 00:23:27.878: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:28.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:28.886: INFO: Number of nodes with available pods: 1 Jul 20 00:23:28.886: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:29.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:29.887: INFO: Number of nodes with available pods: 1 Jul 20 00:23:29.887: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:30.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:30.887: INFO: Number of nodes with available pods: 1 Jul 20 00:23:30.887: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:31.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:31.885: INFO: Number of nodes with available pods: 1 Jul 20 00:23:31.885: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:32.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Jul 20 00:23:32.886: INFO: Number of nodes with available pods: 1 Jul 20 00:23:32.886: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:33.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:33.887: INFO: Number of nodes with available pods: 1 Jul 20 00:23:33.887: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:34.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:34.886: INFO: Number of nodes with available pods: 1 Jul 20 00:23:34.886: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:35.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:35.886: INFO: Number of nodes with available pods: 1 Jul 20 00:23:35.886: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:36.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:36.887: INFO: Number of nodes with available pods: 1 Jul 20 00:23:36.887: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:37.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:37.885: INFO: Number of nodes with available pods: 1 Jul 20 00:23:37.885: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:23:38.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:23:38.886: INFO: Number of nodes with available pods: 2 Jul 20 00:23:38.886: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6754, will wait for the garbage collector to delete the pods Jul 20 00:23:38.948: INFO: Deleting DaemonSet.extensions daemon-set took: 5.975786ms Jul 20 00:23:39.248: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.274533ms Jul 20 00:23:46.452: INFO: Number of nodes with available pods: 0 Jul 20 00:23:46.452: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 00:23:46.454: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6754/daemonsets","resourceVersion":"44652"},"items":null} Jul 20 00:23:46.457: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6754/pods","resourceVersion":"44652"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:23:46.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6754" for this suite. 
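The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above show why only the two workers are counted: the test DaemonSet carries no toleration for the master's `node-role.kubernetes.io/master:NoSchedule` taint, so the control-plane node is skipped. A minimal sketch of such a DaemonSet (labels and image are illustrative assumptions):

```yaml
# Sketch: a simple DaemonSet with no master toleration, so its pods
# schedule only onto the worker nodes, matching the log above where
# "Number of running nodes: 2" despite a three-node cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376
```

Adding a toleration for the master taint (or running on an untainted cluster) would raise the expected node count accordingly.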
Jul 20 00:23:54.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:23:54.558: INFO: namespace daemonsets-6754 deletion completed in 8.089796308s • [SLOW TEST:31.854 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:23:54.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jul 20 00:23:54.616: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jul 20 00:23:54.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:54.971: INFO: stderr: "" Jul 20 00:23:54.971: INFO: stdout: 
"service/redis-slave created\n" Jul 20 00:23:54.971: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jul 20 00:23:54.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:55.603: INFO: stderr: "" Jul 20 00:23:55.603: INFO: stdout: "service/redis-master created\n" Jul 20 00:23:55.603: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 20 00:23:55.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:56.053: INFO: stderr: "" Jul 20 00:23:56.053: INFO: stdout: "service/frontend created\n" Jul 20 00:23:56.053: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jul 20 00:23:56.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:56.341: INFO: stderr: "" Jul 20 00:23:56.341: INFO: stdout: "deployment.apps/frontend created\n" Jul 20 00:23:56.341: INFO: apiVersion: apps/v1 kind: Deployment 
metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 20 00:23:56.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:56.716: INFO: stderr: "" Jul 20 00:23:56.716: INFO: stdout: "deployment.apps/redis-master created\n" Jul 20 00:23:56.716: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jul 20 00:23:56.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9725' Jul 20 00:23:56.991: INFO: stderr: "" Jul 20 00:23:56.991: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jul 20 00:23:56.991: INFO: Waiting for all frontend pods to be Running. Jul 20 00:24:07.042: INFO: Waiting for frontend to serve content. Jul 20 00:24:07.060: INFO: Trying to add a new entry to the guestbook. Jul 20 00:24:07.075: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources Jul 20 00:24:07.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.231: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jul 20 00:24:07.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.416: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.416: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 20 00:24:07.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.532: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 20 00:24:07.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.634: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.634: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 20 00:24:07.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.761: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.761: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 20 00:24:07.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9725' Jul 20 00:24:07.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:24:07.968: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:24:07.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9725" for this suite. 
Jul 20 00:24:46.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:24:46.853: INFO: namespace kubectl-9725 deletion completed in 38.785283041s • [SLOW TEST:52.293 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:24:46.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:24:46.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9473' Jul 20 00:24:47.168: INFO: stderr: "" Jul 20 00:24:47.168: INFO: stdout: "replicationcontroller/redis-master created\n" Jul 20 00:24:47.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-9473' Jul 20 00:24:47.509: INFO: stderr: "" Jul 20 00:24:47.509: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jul 20 00:24:48.514: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:24:48.514: INFO: Found 0 / 1 Jul 20 00:24:49.643: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:24:49.643: INFO: Found 0 / 1 Jul 20 00:24:50.513: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:24:50.513: INFO: Found 0 / 1 Jul 20 00:24:51.514: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:24:51.514: INFO: Found 1 / 1 Jul 20 00:24:51.514: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 00:24:51.518: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:24:51.518: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 20 00:24:51.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dfvgq --namespace=kubectl-9473' Jul 20 00:24:51.614: INFO: stderr: "" Jul 20 00:24:51.614: INFO: stdout: "Name: redis-master-dfvgq\nNamespace: kubectl-9473\nPriority: 0\nNode: iruya-worker/172.18.0.5\nStart Time: Mon, 20 Jul 2020 00:24:47 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.85\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://421e653f1bf5e4f90d42adf4e1666425251f5c6c71eec8b8d099f12088cbb950\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 20 Jul 2020 00:24:50 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pln5f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True 
\nVolumes:\n default-token-pln5f:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pln5f\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9473/redis-master-dfvgq to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Jul 20 00:24:51.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9473' Jul 20 00:24:51.728: INFO: stderr: "" Jul 20 00:24:51.728: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9473\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-dfvgq\n" Jul 20 00:24:51.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9473' Jul 20 00:24:51.826: INFO: stderr: "" Jul 20 00:24:51.826: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9473\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.173.74\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 
10.244.1.85:6379\nSession Affinity: None\nEvents: \n" Jul 20 00:24:51.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jul 20 00:24:51.954: INFO: stderr: "" Jul 20 00:24:51.954: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 19 Jul 2020 21:15:33 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 20 Jul 2020 00:24:18 +0000 Sun, 19 Jul 2020 21:15:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 20 Jul 2020 00:24:18 +0000 Sun, 19 Jul 2020 21:15:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 20 Jul 2020 00:24:18 +0000 Sun, 19 Jul 2020 21:15:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 20 Jul 2020 00:24:18 +0000 Sun, 19 Jul 2020 21:16:03 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: ca83ac9a93d54502bb9afb972c3f1f0b\n System UUID: 1d4ac873-683f-4805-8579-15bbb4e4df77\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n 
Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.15.12\n Kube-Proxy Version: v1.15.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-clz9n 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h8m\n kube-system coredns-5d4dd4b4db-w42x4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h8m\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h8m\n kube-system kindnet-xbjsm 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3h8m\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3h8m\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3h8m\n kube-system kube-proxy-nwhvb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h8m\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h8m\n local-path-storage local-path-provisioner-668779bd7-sf66r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h8m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 20 00:24:51.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9473' Jul 20 00:24:52.064: INFO: stderr: "" Jul 20 00:24:52.064: INFO: stdout: "Name: kubectl-9473\nLabels: e2e-framework=kubectl\n e2e-run=d5cd44e8-f7c1-452b-8ff7-a341910ef756\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:24:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "kubectl-9473" for this suite. Jul 20 00:25:14.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:25:14.162: INFO: namespace kubectl-9473 deletion completed in 22.093598592s • [SLOW TEST:27.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:25:14.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 20 00:25:14.291: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
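"Registering the sample API server" means the test deploys the sample-apiserver behind a Service and then creates an APIService object so the aggregator routes the new group/version to it. A hedged sketch of that registration object (the group name, namespace, and priorities are assumptions based on the sample-apiserver convention; the real values and CA bundle are generated by the test framework):

```yaml
# Sketch of the APIService registration step performed by this spec.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io     # assumed sample-apiserver group/version
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api               # Service fronting sample-apiserver-deployment
    namespace: aggregator-example  # illustrative; the run used a generated namespace
  # caBundle: <base64-encoded CA, generated by the test's certs>
```

The deployment-status dumps that follow are the framework polling `sample-apiserver-deployment` until it reports an available replica.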
Jul 20 00:25:14.642: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 20 00:25:16.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:25:18.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:25:20.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:25:22.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:25:24.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:25:26.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801514, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} 
Jul 20 00:25:29.423: INFO: Waited 624.659763ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:25:30.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6443" for this suite. Jul 20 00:25:36.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:25:36.440: INFO: namespace aggregator-6443 deletion completed in 6.289436444s • [SLOW TEST:22.278 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:25:36.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 20 00:25:36.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1" in namespace "projected-5179" to be "success or failure" Jul 20 00:25:36.512: INFO: Pod "downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.254807ms Jul 20 00:25:38.517: INFO: Pod "downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00786697s Jul 20 00:25:40.521: INFO: Pod "downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012031599s STEP: Saw pod success Jul 20 00:25:40.521: INFO: Pod "downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1" satisfied condition "success or failure" Jul 20 00:25:40.525: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1 container client-container: STEP: delete the pod Jul 20 00:25:40.565: INFO: Waiting for pod downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1 to disappear Jul 20 00:25:40.572: INFO: Pod downwardapi-volume-df05f668-54ab-424c-a6cf-38006bd391e1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:25:40.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5179" for this suite. 
Jul 20 00:25:46.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:25:46.663: INFO: namespace projected-5179 deletion completed in 6.087008088s • [SLOW TEST:10.222 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:25:46.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 20 00:25:51.343: INFO: Successfully updated pod "labelsupdate39e293ae-335d-448a-a4aa-226d307138a5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:25:55.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-645" for this suite. 
Jul 20 00:26:17.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:26:17.507: INFO: namespace projected-645 deletion completed in 22.093904865s • [SLOW TEST:30.844 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:26:17.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-b5455c2c-423e-4035-a485-567b857e60d0 STEP: Creating a pod to test consume configMaps Jul 20 00:26:17.633: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef" in namespace "projected-3096" to be "success or failure" Jul 20 00:26:17.639: INFO: Pod "pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.683308ms Jul 20 00:26:19.643: INFO: Pod "pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010200209s Jul 20 00:26:21.647: INFO: Pod "pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014477814s STEP: Saw pod success Jul 20 00:26:21.647: INFO: Pod "pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef" satisfied condition "success or failure" Jul 20 00:26:21.650: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef container projected-configmap-volume-test: STEP: delete the pod Jul 20 00:26:21.670: INFO: Waiting for pod pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef to disappear Jul 20 00:26:21.674: INFO: Pod pod-projected-configmaps-c19b874b-85e7-45d9-bc14-3e05b4f70aef no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:26:21.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3096" for this suite. 
Jul 20 00:26:27.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:26:27.779: INFO: namespace projected-3096 deletion completed in 6.101799093s • [SLOW TEST:10.271 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:26:27.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:26:27.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 20 00:26:28.023: INFO: stderr: "" Jul 20 00:26:28.023: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:54:28Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", 
GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:26:28.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2894" for this suite. Jul 20 00:26:34.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:26:34.122: INFO: namespace kubectl-2894 deletion completed in 6.09364509s • [SLOW TEST:6.342 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:26:34.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the 
pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 20 00:26:40.232: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-9e0bae11-7280-43fc-8bd2-f0fcabde59b7,GenerateName:,Namespace:events-8545,SelfLink:/api/v1/namespaces/events-8545/pods/send-events-9e0bae11-7280-43fc-8bd2-f0fcabde59b7,UID:5aaeaabd-a78e-41c5-8b99-e7e081ac79d4,ResourceVersion:45415,Generation:0,CreationTimestamp:2020-07-20 00:26:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 209524943,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qvwjq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qvwjq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qvwjq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00273db80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00273dba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:26:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:26:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:26:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:26:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.88,StartTime:2020-07-20 00:26:34 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-20 00:26:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://571bad763141003a6dcabecc82f56162fba3c3745f154ed674f24ef893daba74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 20 00:26:42.238: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 20 00:26:44.243: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:26:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8545" for this suite. 
Jul 20 00:27:26.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:27:26.422: INFO: namespace events-8545 deletion completed in 42.140633259s • [SLOW TEST:52.300 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:27:26.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 20 00:27:26.544: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:26.555: INFO: Number of nodes with available pods: 0 Jul 20 00:27:26.555: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:27:27.559: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:27.561: INFO: Number of nodes with available pods: 0 Jul 20 00:27:27.561: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:27:28.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:28.565: INFO: Number of nodes with available pods: 0 Jul 20 00:27:28.565: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:27:29.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:29.564: INFO: Number of nodes with available pods: 0 Jul 20 00:27:29.564: INFO: Node iruya-worker is running more than one daemon pod Jul 20 00:27:30.560: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:30.563: INFO: Number of nodes with available pods: 1 Jul 20 00:27:30.563: INFO: Node iruya-worker2 is running more than one daemon pod Jul 20 00:27:31.586: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:31.589: INFO: Number of nodes with available pods: 2 Jul 20 00:27:31.589: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 20 00:27:31.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 00:27:31.639: INFO: Number of nodes with available pods: 2 Jul 20 00:27:31.639: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7317, will wait for the garbage collector to delete the pods Jul 20 00:27:32.818: INFO: Deleting DaemonSet.extensions daemon-set took: 5.368193ms Jul 20 00:27:33.118: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244035ms Jul 20 00:27:46.321: INFO: Number of nodes with available pods: 0 Jul 20 00:27:46.321: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 00:27:46.324: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7317/daemonsets","resourceVersion":"45614"},"items":null} Jul 20 00:27:46.326: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7317/pods","resourceVersion":"45614"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:27:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7317" for this suite. 
Jul 20 00:27:52.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:27:52.472: INFO: namespace daemonsets-7317 deletion completed in 6.131431673s • [SLOW TEST:26.049 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:27:52.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:27:57.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5068" for this suite. 
Jul 20 00:28:04.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:28:04.165: INFO: namespace watch-5068 deletion completed in 6.181241415s • [SLOW TEST:11.693 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:28:04.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:28:08.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2474" for this suite. 
Jul 20 00:29:00.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:29:00.623: INFO: namespace kubelet-test-2474 deletion completed in 52.312742435s • [SLOW TEST:56.458 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:29:00.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jul 20 00:29:01.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1782' Jul 20 00:29:04.104: INFO: stderr: "" Jul 20 00:29:04.104: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jul 20 00:29:05.691: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:05.691: INFO: Found 0 / 1 Jul 20 00:29:06.245: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:06.245: INFO: Found 0 / 1 Jul 20 00:29:07.109: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:07.109: INFO: Found 0 / 1 Jul 20 00:29:08.473: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:08.473: INFO: Found 0 / 1 Jul 20 00:29:09.109: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:09.109: INFO: Found 0 / 1 Jul 20 00:29:10.109: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:10.109: INFO: Found 0 / 1 Jul 20 00:29:11.191: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:11.192: INFO: Found 0 / 1 Jul 20 00:29:12.109: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:12.109: INFO: Found 0 / 1 Jul 20 00:29:13.258: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:13.258: INFO: Found 0 / 1 Jul 20 00:29:14.179: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:14.179: INFO: Found 0 / 1 Jul 20 00:29:15.241: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:15.241: INFO: Found 0 / 1 Jul 20 00:29:16.186: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:16.186: INFO: Found 1 / 1 Jul 20 00:29:16.186: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 00:29:16.188: INFO: Selector matched 1 pods for map[app:redis] Jul 20 00:29:16.188: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jul 20 00:29:16.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782' Jul 20 00:29:16.279: INFO: stderr: "" Jul 20 00:29:16.279: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. 
`_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jul 00:29:14.611 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jul 00:29:14.612 # Server started, Redis version 3.2.12\n1:M 20 Jul 00:29:14.612 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Jul 00:29:14.612 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jul 20 00:29:16.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782 --tail=1' Jul 20 00:29:16.449: INFO: stderr: "" Jul 20 00:29:16.449: INFO: stdout: "1:M 20 Jul 00:29:14.612 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jul 20 00:29:16.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782 --limit-bytes=1' Jul 20 00:29:16.545: INFO: stderr: "" Jul 20 00:29:16.545: INFO: stdout: " " STEP: exposing timestamps Jul 20 00:29:16.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782 --tail=1 --timestamps' Jul 20 00:29:16.656: INFO: stderr: "" Jul 20 00:29:16.656: INFO: stdout: "2020-07-20T00:29:14.843754068Z 1:M 20 Jul 00:29:14.612 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jul 20 00:29:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782 --since=1s' Jul 20 00:29:19.262: INFO: stderr: "" Jul 20 00:29:19.262: INFO: stdout: "" Jul 20 00:29:19.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x2lkm redis-master --namespace=kubectl-1782 --since=24h' Jul 20 00:29:19.364: INFO: stderr: "" Jul 20 00:29:19.364: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jul 00:29:14.611 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jul 00:29:14.612 # Server started, Redis version 3.2.12\n1:M 20 Jul 00:29:14.612 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Jul 00:29:14.612 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jul 20 00:29:19.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1782' Jul 20 00:29:19.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 20 00:29:19.520: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jul 20 00:29:19.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1782' Jul 20 00:29:19.622: INFO: stderr: "No resources found.\n" Jul 20 00:29:19.622: INFO: stdout: "" Jul 20 00:29:19.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1782 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 20 00:29:19.917: INFO: stderr: "" Jul 20 00:29:19.917: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:29:19.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1782" for this suite. 
Jul 20 00:29:41.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:29:42.092: INFO: namespace kubectl-1782 deletion completed in 22.131419074s • [SLOW TEST:41.468 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:29:42.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 20 00:29:42.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f" in namespace "projected-5141" to be "success or failure" Jul 20 00:29:42.199: INFO: Pod 
"downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935575ms Jul 20 00:29:44.281: INFO: Pod "downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086853219s Jul 20 00:29:46.285: INFO: Pod "downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090557466s STEP: Saw pod success Jul 20 00:29:46.285: INFO: Pod "downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f" satisfied condition "success or failure" Jul 20 00:29:46.289: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f container client-container: STEP: delete the pod Jul 20 00:29:46.331: INFO: Waiting for pod downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f to disappear Jul 20 00:29:46.431: INFO: Pod downwardapi-volume-6ded4d50-9f9d-49ee-8b68-3dd77a59a98f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:29:46.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5141" for this suite. 
Jul 20 00:29:52.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:29:52.644: INFO: namespace projected-5141 deletion completed in 6.208624122s • [SLOW TEST:10.552 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:29:52.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 20 00:29:52.848: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8638,SelfLink:/api/v1/namespaces/watch-8638/configmaps/e2e-watch-test-watch-closed,UID:ed14dc51-47c9-4494-8237-93e2bcfb838e,ResourceVersion:46083,Generation:0,CreationTimestamp:2020-07-20 
00:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 20 00:29:52.848: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8638,SelfLink:/api/v1/namespaces/watch-8638/configmaps/e2e-watch-test-watch-closed,UID:ed14dc51-47c9-4494-8237-93e2bcfb838e,ResourceVersion:46084,Generation:0,CreationTimestamp:2020-07-20 00:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 20 00:29:53.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8638,SelfLink:/api/v1/namespaces/watch-8638/configmaps/e2e-watch-test-watch-closed,UID:ed14dc51-47c9-4494-8237-93e2bcfb838e,ResourceVersion:46085,Generation:0,CreationTimestamp:2020-07-20 00:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jul 20 00:29:53.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8638,SelfLink:/api/v1/namespaces/watch-8638/configmaps/e2e-watch-test-watch-closed,UID:ed14dc51-47c9-4494-8237-93e2bcfb838e,ResourceVersion:46086,Generation:0,CreationTimestamp:2020-07-20 00:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:29:53.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8638" for this suite. 
Jul 20 00:29:59.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:29:59.169: INFO: namespace watch-8638 deletion completed in 6.088879293s • [SLOW TEST:6.524 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:29:59.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jul 20 00:29:59.321: INFO: Waiting up to 5m0s for pod "client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a" in namespace "containers-8369" to be "success or failure" Jul 20 00:29:59.511: INFO: Pod "client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 190.338117ms Jul 20 00:30:01.515: INFO: Pod "client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.194343747s Jul 20 00:30:03.519: INFO: Pod "client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198099193s STEP: Saw pod success Jul 20 00:30:03.519: INFO: Pod "client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a" satisfied condition "success or failure" Jul 20 00:30:03.521: INFO: Trying to get logs from node iruya-worker pod client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a container test-container: STEP: delete the pod Jul 20 00:30:03.559: INFO: Waiting for pod client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a to disappear Jul 20 00:30:03.625: INFO: Pod client-containers-c0f09a74-45f7-466a-959c-c17648bccc7a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:30:03.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8369" for this suite. Jul 20 00:30:09.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:30:09.792: INFO: namespace containers-8369 deletion completed in 6.161980357s • [SLOW TEST:10.623 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:30:09.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:30:18.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8830" for this suite. 
Jul 20 00:30:24.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:30:24.454: INFO: namespace kubelet-test-8830 deletion completed in 6.167679274s • [SLOW TEST:14.662 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:30:24.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 20 00:30:24.544: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46204,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 20 00:30:24.544: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46205,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 20 00:30:24.544: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46206,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 20 00:30:34.590: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46227,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 20 00:30:34.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46228,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jul 20 00:30:34.590: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5702,SelfLink:/api/v1/namespaces/watch-5702/configmaps/e2e-watch-test-label-changed,UID:99dbc97a-6a7c-4771-8765-34f9b5887727,ResourceVersion:46229,Generation:0,CreationTimestamp:2020-07-20 00:30:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:30:34.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5702" for this suite. Jul 20 00:30:40.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:30:40.687: INFO: namespace watch-5702 deletion completed in 6.091943124s • [SLOW TEST:16.232 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:30:40.687: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 20 00:30:40.848: INFO: Waiting up to 5m0s for pod "pod-dc37805a-e7b9-479a-ae80-36177afd3558" in namespace "emptydir-9963" to be "success or failure" Jul 20 00:30:40.852: INFO: Pod "pod-dc37805a-e7b9-479a-ae80-36177afd3558": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139671ms Jul 20 00:30:43.138: INFO: Pod "pod-dc37805a-e7b9-479a-ae80-36177afd3558": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290433672s Jul 20 00:30:45.143: INFO: Pod "pod-dc37805a-e7b9-479a-ae80-36177afd3558": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294618709s STEP: Saw pod success Jul 20 00:30:45.143: INFO: Pod "pod-dc37805a-e7b9-479a-ae80-36177afd3558" satisfied condition "success or failure" Jul 20 00:30:45.146: INFO: Trying to get logs from node iruya-worker pod pod-dc37805a-e7b9-479a-ae80-36177afd3558 container test-container: STEP: delete the pod Jul 20 00:30:45.178: INFO: Waiting for pod pod-dc37805a-e7b9-479a-ae80-36177afd3558 to disappear Jul 20 00:30:45.181: INFO: Pod pod-dc37805a-e7b9-479a-ae80-36177afd3558 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:30:45.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9963" for this suite. 
Jul 20 00:30:51.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:30:51.320: INFO: namespace emptydir-9963 deletion completed in 6.134750962s • [SLOW TEST:10.632 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:30:51.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 20 00:30:51.350: INFO: Creating deployment "test-recreate-deployment" Jul 20 00:30:51.376: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 20 00:30:51.410: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 20 00:30:53.418: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 20 00:30:53.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:30:55.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730801851, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 00:30:57.438: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 20 00:30:57.446: INFO: Updating deployment test-recreate-deployment Jul 20 
00:30:57.446: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 20 00:30:57.763: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8602,SelfLink:/apis/apps/v1/namespaces/deployment-8602/deployments/test-recreate-deployment,UID:535db1a8-a6a0-4278-9ab2-e179f6f68f57,ResourceVersion:46344,Generation:2,CreationTimestamp:2020-07-20 00:30:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-20 00:30:57 +0000 UTC 2020-07-20 00:30:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-20 00:30:57 +0000 UTC 2020-07-20 00:30:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jul 20 00:30:57.766: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8602,SelfLink:/apis/apps/v1/namespaces/deployment-8602/replicasets/test-recreate-deployment-5c8c9cc69d,UID:6256e496-d8e9-4023-9846-588ee12bf6eb,ResourceVersion:46341,Generation:1,CreationTimestamp:2020-07-20 00:30:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 535db1a8-a6a0-4278-9ab2-e179f6f68f57 0xc0030e8557 0xc0030e8558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 20 00:30:57.767: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 20 00:30:57.767: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8602,SelfLink:/apis/apps/v1/namespaces/deployment-8602/replicasets/test-recreate-deployment-6df85df6b9,UID:b62120b2-57ba-4d91-a5e2-d35acb7dfd12,ResourceVersion:46332,Generation:2,CreationTimestamp:2020-07-20 00:30:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 535db1a8-a6a0-4278-9ab2-e179f6f68f57 0xc0030e8627 0xc0030e8628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 20 00:30:57.990: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5fflm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5fflm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8602,SelfLink:/api/v1/namespaces/deployment-8602/pods/test-recreate-deployment-5c8c9cc69d-5fflm,UID:042eaacd-4596-4e77-ad45-f6e1bda68831,ResourceVersion:46345,Generation:0,CreationTimestamp:2020-07-20 00:30:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 6256e496-d8e9-4023-9846-588ee12bf6eb 0xc0030e8f07 0xc0030e8f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ccqgk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ccqgk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ccqgk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030e8f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030e8fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:30:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:30:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:30:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:30:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-20 00:30:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:30:57.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8602" for this suite. 
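The deployment status dumps above show `Available=False` with reason `MinimumReplicasUnavailable` while the Recreate rollout is in flight. The arithmetic behind that condition can be sketched as follows; this is an illustrative simplification, not the actual deployment controller code, and the type and function names are invented for the example:

```go
package main

import "fmt"

// deploymentStatus holds just the replica counts relevant to the
// Available condition (an illustrative subset of v1.DeploymentStatus).
type deploymentStatus struct {
	Desired           int32
	AvailableReplicas int32
}

// minAvailability reports whether the deployment meets minimum
// availability: availableReplicas >= desired - maxUnavailable.
// With strategy Recreate, all old pods are deleted before any new
// ones start, so availableReplicas drops to 0 and the condition
// flips to False until the new ReplicaSet's pods become ready —
// exactly the window the log above captures.
func minAvailability(s deploymentStatus, maxUnavailable int32) bool {
	return s.AvailableReplicas >= s.Desired-maxUnavailable
}

func main() {
	during := deploymentStatus{Desired: 1, AvailableReplicas: 0}
	after := deploymentStatus{Desired: 1, AvailableReplicas: 1}
	fmt.Println(minAvailability(during, 0)) // mid-rollout: false
	fmt.Println(minAvailability(after, 0))  // once the new pod is ready: true
}
```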
Jul 20 00:31:04.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:31:04.219: INFO: namespace deployment-8602 deletion completed in 6.203173804s • [SLOW TEST:12.899 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:31:04.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 20 00:31:14.443: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 00:31:14.463: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 00:31:16.463: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 00:31:16.467: INFO: Pod pod-with-poststart-http-hook still exists Jul 20 00:31:18.463: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 20 00:31:18.485: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:31:18.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2851" for this suite. 
Jul 20 00:31:42.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:31:42.580: INFO: namespace container-lifecycle-hook-2851 deletion completed in 24.092612024s • [SLOW TEST:38.361 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:31:42.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jul 20 00:31:42.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jul 20 00:31:42.830: INFO: stderr: "" Jul 20 00:31:42.830: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:38261\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:31:42.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7727" for this suite. Jul 20 00:31:48.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:31:49.007: INFO: namespace kubectl-7727 deletion completed in 6.163603627s • [SLOW TEST:6.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:31:49.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 20 00:31:49.112: INFO: Waiting up to 5m0s for pod "downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa" in namespace "downward-api-5887" to be "success or failure" Jul 20 00:31:49.117: INFO: Pod "downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.315189ms Jul 20 00:31:51.122: INFO: Pod "downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009658164s Jul 20 00:31:53.135: INFO: Pod "downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022842881s STEP: Saw pod success Jul 20 00:31:53.135: INFO: Pod "downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa" satisfied condition "success or failure" Jul 20 00:31:53.139: INFO: Trying to get logs from node iruya-worker2 pod downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa container dapi-container: STEP: delete the pod Jul 20 00:31:53.190: INFO: Waiting for pod downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa to disappear Jul 20 00:31:53.194: INFO: Pod downward-api-6b9e39cd-a3c2-471d-9b51-13b2cf7593aa no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:31:53.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5887" for this suite. 
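The `kubectl cluster-info` stdout captured earlier is wrapped in ANSI SGR color sequences (`\x1b[0;32m` ... `\x1b[0m`). When post-processing a captured run, those can be stripped with a small regular expression; this helper is an assumption for illustration, not part of the e2e framework:

```go
package main

import (
	"fmt"
	"regexp"
)

// ansi matches SGR color sequences like \x1b[0;32m and \x1b[0m,
// which kubectl cluster-info emits around "Kubernetes master" and
// the endpoint URLs in the log above.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// stripANSI returns s with all color escape sequences removed,
// which makes captured e2e output greppable.
func stripANSI(s string) string {
	return ansi.ReplaceAllString(s, "")
}

func main() {
	colored := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261\x1b[0m"
	fmt.Println(stripANSI(colored)) // plain text, no escape codes
}
```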
Jul 20 00:31:59.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:31:59.293: INFO: namespace downward-api-5887 deletion completed in 6.09618335s • [SLOW TEST:10.285 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:31:59.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 20 00:31:59.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53" in namespace "projected-9312" to be "success or failure" Jul 20 00:31:59.398: INFO: Pod "downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.616823ms Jul 20 00:32:01.402: INFO: Pod "downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025850823s Jul 20 00:32:03.406: INFO: Pod "downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029632984s STEP: Saw pod success Jul 20 00:32:03.406: INFO: Pod "downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53" satisfied condition "success or failure" Jul 20 00:32:03.409: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53 container client-container: STEP: delete the pod Jul 20 00:32:03.429: INFO: Waiting for pod downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53 to disappear Jul 20 00:32:03.434: INFO: Pod downwardapi-volume-bbbbe5e7-03f3-4736-a49f-bbfa27e02e53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:32:03.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9312" for this suite. 
Jul 20 00:32:09.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:32:09.528: INFO: namespace projected-9312 deletion completed in 6.091326303s • [SLOW TEST:10.235 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:32:09.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0720 00:32:21.267660 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 20 00:32:21.267: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:32:21.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4680" for this suite. 
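The garbage-collector test above gives half of the pods a second owner (`simpletest-rc-to-stay`) before deleting the first (`simpletest-rc-to-be-deleted`), and expects those pods to survive. The underlying rule — an object is collectible only when all of its owners are gone — can be sketched like this (an illustrative model, not the actual GC controller logic):

```go
package main

import "fmt"

// collectible reports whether an object with the given owner UIDs
// may be garbage-collected: only when every owner has been deleted.
// This is the invariant the test above verifies by adding a second,
// still-live owner to half the pods before deleting the first RC.
func collectible(ownerUIDs []string, liveUIDs map[string]bool) bool {
	for _, uid := range ownerUIDs {
		if liveUIDs[uid] {
			return false // at least one valid owner still exists
		}
	}
	return true
}

func main() {
	live := map[string]bool{"rc-to-stay": true} // rc-to-be-deleted is gone
	fmt.Println(collectible([]string{"rc-to-be-deleted"}, live))                // true: sole owner deleted
	fmt.Println(collectible([]string{"rc-to-be-deleted", "rc-to-stay"}, live)) // false: second owner remains
}
```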
Jul 20 00:32:29.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:32:29.366: INFO: namespace gc-4680 deletion completed in 8.095058256s • [SLOW TEST:19.838 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:32:29.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0720 00:32:39.473721 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 20 00:32:39.473: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:32:39.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5000" for this suite. 
Jul 20 00:32:45.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 20 00:32:45.569: INFO: namespace gc-5000 deletion completed in 6.092014583s • [SLOW TEST:16.202 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 20 00:32:45.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0720 00:32:46.716190 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 20 00:32:46.716: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 20 00:32:46.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1377" for this suite. 
Jul 20 00:32:52.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:32:52.835: INFO: namespace gc-1377 deletion completed in 6.115337278s

• [SLOW TEST:7.266 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:32:52.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul 20 00:32:52.907: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 00:32:52.941: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 00:32:52.943: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Jul 20 00:32:52.947: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:32:52.947: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 00:32:52.947: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:32:52.947: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 00:32:52.947: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Jul 20 00:32:52.951: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Jul 20 00:32:52.951: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 00:32:52.951: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Jul 20 00:32:52.951: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9f1ea241-69a6-47f7-8d50-597ceb3e0d1c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9f1ea241-69a6-47f7-8d50-597ceb3e0d1c off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9f1ea241-69a6-47f7-8d50-597ceb3e0d1c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:33:03.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7585" for this suite.
Jul 20 00:33:13.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:33:13.420: INFO: namespace sched-pred-7585 deletion completed in 10.154070973s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.585 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:33:13.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 00:33:21.652: INFO: DNS probes using dns-2384/dns-test-4f3545a1-e49d-4dce-98d0-01cb09c18136 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:33:21.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2384" for this suite.
Jul 20 00:33:27.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:33:28.429: INFO: namespace dns-2384 deletion completed in 6.661880679s

• [SLOW TEST:15.009 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:33:28.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 20 00:33:33.589: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:33:33.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5386" for this suite.
Jul 20 00:33:39.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:33:39.723: INFO: namespace container-runtime-5386 deletion completed in 6.113751652s

• [SLOW TEST:11.294 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:33:39.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-9xdg
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 00:33:39.932: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9xdg" in namespace "subpath-3917" to be "success or failure"
Jul 20 00:33:40.018: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Pending", Reason="", readiness=false. Elapsed: 86.04114ms
Jul 20 00:33:42.022: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089910903s
Jul 20 00:33:44.027: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 4.094614316s
Jul 20 00:33:46.031: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 6.098835685s
Jul 20 00:33:48.035: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 8.102967162s
Jul 20 00:33:50.039: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 10.107364272s
Jul 20 00:33:52.044: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 12.111626677s
Jul 20 00:33:54.048: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 14.115967697s
Jul 20 00:33:56.054: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 16.122006755s
Jul 20 00:33:58.058: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 18.125914026s
Jul 20 00:34:00.062: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 20.13034661s
Jul 20 00:34:02.067: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Running", Reason="", readiness=true. Elapsed: 22.134967633s
Jul 20 00:34:04.071: INFO: Pod "pod-subpath-test-projected-9xdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.139272721s
STEP: Saw pod success
Jul 20 00:34:04.071: INFO: Pod "pod-subpath-test-projected-9xdg" satisfied condition "success or failure"
Jul 20 00:34:04.074: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-9xdg container test-container-subpath-projected-9xdg:
STEP: delete the pod
Jul 20 00:34:04.185: INFO: Waiting for pod pod-subpath-test-projected-9xdg to disappear
Jul 20 00:34:04.244: INFO: Pod pod-subpath-test-projected-9xdg no longer exists
STEP: Deleting pod pod-subpath-test-projected-9xdg
Jul 20 00:34:04.244: INFO: Deleting pod "pod-subpath-test-projected-9xdg" in namespace "subpath-3917"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:34:04.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3917" for this suite.
Jul 20 00:34:10.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:34:10.358: INFO: namespace subpath-3917 deletion completed in 6.107389348s

• [SLOW TEST:30.635 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:34:10.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 00:34:10.429: INFO: Waiting up to 5m0s for pod "pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c" in namespace "emptydir-4495" to be "success or failure"
Jul 20 00:34:10.477: INFO: Pod "pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.575307ms
Jul 20 00:34:12.513: INFO: Pod "pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083744715s
Jul 20 00:34:14.517: INFO: Pod "pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08821954s
STEP: Saw pod success
Jul 20 00:34:14.517: INFO: Pod "pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c" satisfied condition "success or failure"
Jul 20 00:34:14.520: INFO: Trying to get logs from node iruya-worker2 pod pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c container test-container:
STEP: delete the pod
Jul 20 00:34:14.552: INFO: Waiting for pod pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c to disappear
Jul 20 00:34:14.570: INFO: Pod pod-a05afbd0-bfd5-4041-9c1c-e1a1d29dd95c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:34:14.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4495" for this suite.
Jul 20 00:34:20.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:34:20.690: INFO: namespace emptydir-4495 deletion completed in 6.115910709s

• [SLOW TEST:10.332 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:34:20.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6c3628f0-e920-446d-b89b-f2b21cc86130
STEP: Creating configMap with name cm-test-opt-upd-c8b406a4-e8dc-4828-8bd7-d18a5ece5e46
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6c3628f0-e920-446d-b89b-f2b21cc86130
STEP: Updating configmap cm-test-opt-upd-c8b406a4-e8dc-4828-8bd7-d18a5ece5e46
STEP: Creating configMap with name cm-test-opt-create-cdfaa583-84ce-4f0a-afe6-1b31c6fe4864
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:35:39.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2731" for this suite.
Jul 20 00:35:59.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:35:59.373: INFO: namespace configmap-2731 deletion completed in 20.091571097s

• [SLOW TEST:98.683 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:35:59.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:35:59.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb" in namespace "projected-7124" to be "success or failure"
Jul 20 00:35:59.463: INFO: Pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.696296ms
Jul 20 00:36:01.468: INFO: Pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006890517s
Jul 20 00:36:03.471: INFO: Pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb": Phase="Running", Reason="", readiness=true. Elapsed: 4.010606485s
Jul 20 00:36:05.521: INFO: Pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060135604s
STEP: Saw pod success
Jul 20 00:36:05.521: INFO: Pod "downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb" satisfied condition "success or failure"
Jul 20 00:36:05.524: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb container client-container:
STEP: delete the pod
Jul 20 00:36:05.559: INFO: Waiting for pod downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb to disappear
Jul 20 00:36:05.575: INFO: Pod downwardapi-volume-8c2408fe-5900-4e6c-8a08-f5112d18f1fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:36:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7124" for this suite.
Jul 20 00:36:11.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:36:11.655: INFO: namespace projected-7124 deletion completed in 6.077213839s

• [SLOW TEST:12.281 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:36:11.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jul 20 00:36:11.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8276'
Jul 20 00:36:20.788: INFO: stderr: ""
Jul 20 00:36:20.788: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 00:36:20.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8276'
Jul 20 00:36:20.904: INFO: stderr: ""
Jul 20 00:36:20.904: INFO: stdout: "update-demo-nautilus-hm766 update-demo-nautilus-vvc4z "
Jul 20 00:36:20.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hm766 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:21.024: INFO: stderr: ""
Jul 20 00:36:21.024: INFO: stdout: ""
Jul 20 00:36:21.024: INFO: update-demo-nautilus-hm766 is created but not running
Jul 20 00:36:26.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8276'
Jul 20 00:36:26.132: INFO: stderr: ""
Jul 20 00:36:26.132: INFO: stdout: "update-demo-nautilus-hm766 update-demo-nautilus-vvc4z "
Jul 20 00:36:26.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hm766 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:26.214: INFO: stderr: ""
Jul 20 00:36:26.214: INFO: stdout: "true"
Jul 20 00:36:26.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hm766 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:26.396: INFO: stderr: ""
Jul 20 00:36:26.396: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:36:26.396: INFO: validating pod update-demo-nautilus-hm766
Jul 20 00:36:26.400: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:36:26.401: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:36:26.401: INFO: update-demo-nautilus-hm766 is verified up and running
Jul 20 00:36:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvc4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:26.498: INFO: stderr: ""
Jul 20 00:36:26.498: INFO: stdout: "true"
Jul 20 00:36:26.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvc4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:26.772: INFO: stderr: ""
Jul 20 00:36:26.772: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:36:26.772: INFO: validating pod update-demo-nautilus-vvc4z
Jul 20 00:36:26.790: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:36:26.790: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:36:26.791: INFO: update-demo-nautilus-vvc4z is verified up and running
STEP: rolling-update to new replication controller
Jul 20 00:36:26.794: INFO: scanned /root for discovery docs:
Jul 20 00:36:26.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8276'
Jul 20 00:36:52.253: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 20 00:36:52.253: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 00:36:52.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8276'
Jul 20 00:36:52.345: INFO: stderr: ""
Jul 20 00:36:52.345: INFO: stdout: "update-demo-kitten-4tg6m update-demo-kitten-n8b6h "
Jul 20 00:36:52.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4tg6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:52.445: INFO: stderr: ""
Jul 20 00:36:52.445: INFO: stdout: "true"
Jul 20 00:36:52.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4tg6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:52.536: INFO: stderr: ""
Jul 20 00:36:52.536: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 20 00:36:52.536: INFO: validating pod update-demo-kitten-4tg6m
Jul 20 00:36:52.540: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 20 00:36:52.540: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 20 00:36:52.540: INFO: update-demo-kitten-4tg6m is verified up and running
Jul 20 00:36:52.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n8b6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:52.629: INFO: stderr: ""
Jul 20 00:36:52.629: INFO: stdout: "true"
Jul 20 00:36:52.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n8b6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8276'
Jul 20 00:36:52.725: INFO: stderr: ""
Jul 20 00:36:52.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 20 00:36:52.725: INFO: validating pod update-demo-kitten-n8b6h
Jul 20 00:36:52.729: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 20 00:36:52.729: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 20 00:36:52.729: INFO: update-demo-kitten-n8b6h is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:36:52.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8276" for this suite.
Jul 20 00:37:14.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:37:14.926: INFO: namespace kubectl-8276 deletion completed in 22.194777864s

• [SLOW TEST:63.271 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:37:14.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:37:15.019: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:37:21.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea" in namespace "projected-7087" to be "success or failure"
Jul 20 00:37:21.474: INFO: Pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.919892ms
Jul 20 00:37:23.486: INFO: Pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015465357s
Jul 20 00:37:25.490: INFO: Pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea": Phase="Running", Reason="", readiness=true. Elapsed: 4.019830733s
Jul 20 00:37:27.494: INFO: Pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02374071s
STEP: Saw pod success
Jul 20 00:37:27.494: INFO: Pod "downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea" satisfied condition "success or failure"
Jul 20 00:37:27.497: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea container client-container: 
STEP: delete the pod
Jul 20 00:37:27.524: INFO: Waiting for pod downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea to disappear
Jul 20 00:37:27.563: INFO: Pod downwardapi-volume-f3383357-d4af-4796-8bd6-403660d9faea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:37:27.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7087" for this suite.
Jul 20 00:37:33.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:37:33.670: INFO: namespace projected-7087 deletion completed in 6.103005374s

• [SLOW TEST:12.283 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:37:33.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7622
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 00:37:33.729: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 00:37:56.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.90:8080/dial?request=hostName&protocol=http&host=10.244.1.112&port=8080&tries=1'] Namespace:pod-network-test-7622 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 00:37:56.329: INFO: >>> kubeConfig: /root/.kube/config
I0720 00:37:56.362296       6 log.go:172] (0xc000feef20) (0xc002405f40) Create stream
I0720 00:37:56.362337       6 log.go:172] (0xc000feef20) (0xc002405f40) Stream added, broadcasting: 1
I0720 00:37:56.364676       6 log.go:172] (0xc000feef20) Reply frame received for 1
I0720 00:37:56.364804       6 log.go:172] (0xc000feef20) (0xc002240820) Create stream
I0720 00:37:56.364822       6 log.go:172] (0xc000feef20) (0xc002240820) Stream added, broadcasting: 3
I0720 00:37:56.365887       6 log.go:172] (0xc000feef20) Reply frame received for 3
I0720 00:37:56.365925       6 log.go:172] (0xc000feef20) (0xc001a0e000) Create stream
I0720 00:37:56.365940       6 log.go:172] (0xc000feef20) (0xc001a0e000) Stream added, broadcasting: 5
I0720 00:37:56.366806       6 log.go:172] (0xc000feef20) Reply frame received for 5
I0720 00:37:56.448377       6 log.go:172] (0xc000feef20) Data frame received for 3
I0720 00:37:56.448436       6 log.go:172] (0xc002240820) (3) Data frame handling
I0720 00:37:56.448476       6 log.go:172] (0xc002240820) (3) Data frame sent
I0720 00:37:56.449213       6 log.go:172] (0xc000feef20) Data frame received for 3
I0720 00:37:56.449228       6 log.go:172] (0xc002240820) (3) Data frame handling
I0720 00:37:56.449275       6 log.go:172] (0xc000feef20) Data frame received for 5
I0720 00:37:56.449300       6 log.go:172] (0xc001a0e000) (5) Data frame handling
I0720 00:37:56.451178       6 log.go:172] (0xc000feef20) Data frame received for 1
I0720 00:37:56.451196       6 log.go:172] (0xc002405f40) (1) Data frame handling
I0720 00:37:56.451210       6 log.go:172] (0xc002405f40) (1) Data frame sent
I0720 00:37:56.451229       6 log.go:172] (0xc000feef20) (0xc002405f40) Stream removed, broadcasting: 1
I0720 00:37:56.451313       6 log.go:172] (0xc000feef20) Go away received
I0720 00:37:56.451403       6 log.go:172] (0xc000feef20) (0xc002405f40) Stream removed, broadcasting: 1
I0720 00:37:56.451447       6 log.go:172] (0xc000feef20) (0xc002240820) Stream removed, broadcasting: 3
I0720 00:37:56.451467       6 log.go:172] (0xc000feef20) (0xc001a0e000) Stream removed, broadcasting: 5
Jul 20 00:37:56.451: INFO: Waiting for endpoints: map[]
Jul 20 00:37:56.455: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.90:8080/dial?request=hostName&protocol=http&host=10.244.2.89&port=8080&tries=1'] Namespace:pod-network-test-7622 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 00:37:56.455: INFO: >>> kubeConfig: /root/.kube/config
I0720 00:37:56.487509       6 log.go:172] (0xc0026e51e0) (0xc0016610e0) Create stream
I0720 00:37:56.487537       6 log.go:172] (0xc0026e51e0) (0xc0016610e0) Stream added, broadcasting: 1
I0720 00:37:56.490700       6 log.go:172] (0xc0026e51e0) Reply frame received for 1
I0720 00:37:56.490761       6 log.go:172] (0xc0026e51e0) (0xc001661180) Create stream
I0720 00:37:56.490775       6 log.go:172] (0xc0026e51e0) (0xc001661180) Stream added, broadcasting: 3
I0720 00:37:56.491952       6 log.go:172] (0xc0026e51e0) Reply frame received for 3
I0720 00:37:56.491999       6 log.go:172] (0xc0026e51e0) (0xc002240a00) Create stream
I0720 00:37:56.492014       6 log.go:172] (0xc0026e51e0) (0xc002240a00) Stream added, broadcasting: 5
I0720 00:37:56.493165       6 log.go:172] (0xc0026e51e0) Reply frame received for 5
I0720 00:37:56.561751       6 log.go:172] (0xc0026e51e0) Data frame received for 3
I0720 00:37:56.561778       6 log.go:172] (0xc001661180) (3) Data frame handling
I0720 00:37:56.561792       6 log.go:172] (0xc001661180) (3) Data frame sent
I0720 00:37:56.562476       6 log.go:172] (0xc0026e51e0) Data frame received for 3
I0720 00:37:56.562522       6 log.go:172] (0xc001661180) (3) Data frame handling
I0720 00:37:56.562547       6 log.go:172] (0xc0026e51e0) Data frame received for 5
I0720 00:37:56.562565       6 log.go:172] (0xc002240a00) (5) Data frame handling
I0720 00:37:56.563831       6 log.go:172] (0xc0026e51e0) Data frame received for 1
I0720 00:37:56.563857       6 log.go:172] (0xc0016610e0) (1) Data frame handling
I0720 00:37:56.563885       6 log.go:172] (0xc0016610e0) (1) Data frame sent
I0720 00:37:56.563901       6 log.go:172] (0xc0026e51e0) (0xc0016610e0) Stream removed, broadcasting: 1
I0720 00:37:56.563964       6 log.go:172] (0xc0026e51e0) Go away received
I0720 00:37:56.563995       6 log.go:172] (0xc0026e51e0) (0xc0016610e0) Stream removed, broadcasting: 1
I0720 00:37:56.564008       6 log.go:172] (0xc0026e51e0) (0xc001661180) Stream removed, broadcasting: 3
I0720 00:37:56.564022       6 log.go:172] (0xc0026e51e0) (0xc002240a00) Stream removed, broadcasting: 5
Jul 20 00:37:56.564: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:37:56.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7622" for this suite.
Jul 20 00:38:20.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:38:20.691: INFO: namespace pod-network-test-7622 deletion completed in 24.110668795s

• [SLOW TEST:47.020 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:38:20.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-af3736a5-512c-4ff5-a325-1bdd493b2517
STEP: Creating a pod to test consume secrets
Jul 20 00:38:20.771: INFO: Waiting up to 5m0s for pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656" in namespace "secrets-7388" to be "success or failure"
Jul 20 00:38:20.793: INFO: Pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656": Phase="Pending", Reason="", readiness=false. Elapsed: 22.145494ms
Jul 20 00:38:23.139: INFO: Pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36791839s
Jul 20 00:38:25.143: INFO: Pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656": Phase="Running", Reason="", readiness=true. Elapsed: 4.372499761s
Jul 20 00:38:27.148: INFO: Pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.376915442s
STEP: Saw pod success
Jul 20 00:38:27.148: INFO: Pod "pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656" satisfied condition "success or failure"
Jul 20 00:38:27.152: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656 container secret-volume-test: 
STEP: delete the pod
Jul 20 00:38:27.176: INFO: Waiting for pod pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656 to disappear
Jul 20 00:38:27.180: INFO: Pod pod-secrets-63fede0e-8b56-403e-afff-bf044cd22656 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:38:27.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7388" for this suite.
Jul 20 00:38:33.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:38:33.355: INFO: namespace secrets-7388 deletion completed in 6.171978503s

• [SLOW TEST:12.663 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:38:33.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2f9bbf71-fb40-448d-a8ef-af0a7dfb06cf
STEP: Creating a pod to test consume secrets
Jul 20 00:38:33.812: INFO: Waiting up to 5m0s for pod "pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85" in namespace "secrets-5113" to be "success or failure"
Jul 20 00:38:33.894: INFO: Pod "pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85": Phase="Pending", Reason="", readiness=false. Elapsed: 81.107167ms
Jul 20 00:38:35.898: INFO: Pod "pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085366267s
Jul 20 00:38:37.902: INFO: Pod "pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089471407s
STEP: Saw pod success
Jul 20 00:38:37.902: INFO: Pod "pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85" satisfied condition "success or failure"
Jul 20 00:38:37.905: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85 container secret-volume-test: 
STEP: delete the pod
Jul 20 00:38:37.925: INFO: Waiting for pod pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85 to disappear
Jul 20 00:38:37.929: INFO: Pod pod-secrets-fe04a6a2-2557-41c5-a6f2-b4c4d3184c85 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:38:37.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5113" for this suite.
Jul 20 00:38:43.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:38:44.024: INFO: namespace secrets-5113 deletion completed in 6.091174066s

• [SLOW TEST:10.669 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:38:44.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul 20 00:38:44.084: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 00:38:44.114: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 00:38:44.117: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Jul 20 00:38:44.121: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:38:44.121: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 00:38:44.121: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Jul 20 00:38:44.121: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 00:38:44.121: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Jul 20 00:38:44.128: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Jul 20 00:38:44.128: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 00:38:44.128: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Jul 20 00:38:44.128: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16234ef8f22e6dbd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:38:45.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7662" for this suite.
Jul 20 00:38:51.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:38:51.254: INFO: namespace sched-pred-7662 deletion completed in 6.104391035s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.230 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:38:51.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul 20 00:38:51.388: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:38:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8941" for this suite.
Jul 20 00:39:03.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:39:03.602: INFO: namespace init-container-8941 deletion completed in 6.100512051s

• [SLOW TEST:12.347 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:39:03.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1412
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 00:39:03.646: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 00:39:27.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.114:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 00:39:27.809: INFO: >>> kubeConfig: /root/.kube/config
I0720 00:39:27.841236       6 log.go:172] (0xc0025a6c60) (0xc001a4d5e0) Create stream
I0720 00:39:27.841277       6 log.go:172] (0xc0025a6c60) (0xc001a4d5e0) Stream added, broadcasting: 1
I0720 00:39:27.843382       6 log.go:172] (0xc0025a6c60) Reply frame received for 1
I0720 00:39:27.843438       6 log.go:172] (0xc0025a6c60) (0xc001a4d680) Create stream
I0720 00:39:27.843453       6 log.go:172] (0xc0025a6c60) (0xc001a4d680) Stream added, broadcasting: 3
I0720 00:39:27.844332       6 log.go:172] (0xc0025a6c60) Reply frame received for 3
I0720 00:39:27.844373       6 log.go:172] (0xc0025a6c60) (0xc001a4d7c0) Create stream
I0720 00:39:27.844386       6 log.go:172] (0xc0025a6c60) (0xc001a4d7c0) Stream added, broadcasting: 5
I0720 00:39:27.845273       6 log.go:172] (0xc0025a6c60) Reply frame received for 5
I0720 00:39:27.907638       6 log.go:172] (0xc0025a6c60) Data frame received for 3
I0720 00:39:27.907672       6 log.go:172] (0xc001a4d680) (3) Data frame handling
I0720 00:39:27.907703       6 log.go:172] (0xc001a4d680) (3) Data frame sent
I0720 00:39:27.907721       6 log.go:172] (0xc0025a6c60) Data frame received for 3
I0720 00:39:27.907733       6 log.go:172] (0xc001a4d680) (3) Data frame handling
I0720 00:39:27.907765       6 log.go:172] (0xc0025a6c60) Data frame received for 5
I0720 00:39:27.907778       6 log.go:172] (0xc001a4d7c0) (5) Data frame handling
I0720 00:39:27.909651       6 log.go:172] (0xc0025a6c60) Data frame received for 1
I0720 00:39:27.909680       6 log.go:172] (0xc001a4d5e0) (1) Data frame handling
I0720 00:39:27.909698       6 log.go:172] (0xc001a4d5e0) (1) Data frame sent
I0720 00:39:27.909712       6 log.go:172] (0xc0025a6c60) (0xc001a4d5e0) Stream removed, broadcasting: 1
I0720 00:39:27.909803       6 log.go:172] (0xc0025a6c60) (0xc001a4d5e0) Stream removed, broadcasting: 1
I0720 00:39:27.909817       6 log.go:172] (0xc0025a6c60) (0xc001a4d680) Stream removed, broadcasting: 3
I0720 00:39:27.909832       6 log.go:172] (0xc0025a6c60) (0xc001a4d7c0) Stream removed, broadcasting: 5
I0720 00:39:27.909854       6 log.go:172] (0xc0025a6c60) Go away received
Jul 20 00:39:27.909: INFO: Found all expected endpoints: [netserver-0]
Jul 20 00:39:27.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.93:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 00:39:27.913: INFO: >>> kubeConfig: /root/.kube/config
I0720 00:39:27.947074       6 log.go:172] (0xc000997b80) (0xc001ee4460) Create stream
I0720 00:39:27.947107       6 log.go:172] (0xc000997b80) (0xc001ee4460) Stream added, broadcasting: 1
I0720 00:39:27.952245       6 log.go:172] (0xc000997b80) Reply frame received for 1
I0720 00:39:27.952294       6 log.go:172] (0xc000997b80) (0xc001a4d860) Create stream
I0720 00:39:27.952312       6 log.go:172] (0xc000997b80) (0xc001a4d860) Stream added, broadcasting: 3
I0720 00:39:27.953603       6 log.go:172] (0xc000997b80) Reply frame received for 3
I0720 00:39:27.953660       6 log.go:172] (0xc000997b80) (0xc001a4dae0) Create stream
I0720 00:39:27.953680       6 log.go:172] (0xc000997b80) (0xc001a4dae0) Stream added, broadcasting: 5
I0720 00:39:27.954610       6 log.go:172] (0xc000997b80) Reply frame received for 5
I0720 00:39:28.030313       6 log.go:172] (0xc000997b80) Data frame received for 3
I0720 00:39:28.030349       6 log.go:172] (0xc001a4d860) (3) Data frame handling
I0720 00:39:28.030371       6 log.go:172] (0xc001a4d860) (3) Data frame sent
I0720 00:39:28.030505       6 log.go:172] (0xc000997b80) Data frame received for 3
I0720 00:39:28.030526       6 log.go:172] (0xc001a4d860) (3) Data frame handling
I0720 00:39:28.031013       6 log.go:172] (0xc000997b80) Data frame received for 5
I0720 00:39:28.031028       6 log.go:172] (0xc001a4dae0) (5) Data frame handling
I0720 00:39:28.032129       6 log.go:172] (0xc000997b80) Data frame received for 1
I0720 00:39:28.032152       6 log.go:172] (0xc001ee4460) (1) Data frame handling
I0720 00:39:28.032166       6 log.go:172] (0xc001ee4460) (1) Data frame sent
I0720 00:39:28.032184       6 log.go:172] (0xc000997b80) (0xc001ee4460) Stream removed, broadcasting: 1
I0720 00:39:28.032238       6 log.go:172] (0xc000997b80) Go away received
I0720 00:39:28.032322       6 log.go:172] (0xc000997b80) (0xc001ee4460) Stream removed, broadcasting: 1
I0720 00:39:28.032359       6 log.go:172] (0xc000997b80) (0xc001a4d860) Stream removed, broadcasting: 3
I0720 00:39:28.032372       6 log.go:172] (0xc000997b80) (0xc001a4dae0) Stream removed, broadcasting: 5
Jul 20 00:39:28.032: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:39:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1412" for this suite.
Jul 20 00:39:52.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:39:52.126: INFO: namespace pod-network-test-1412 deletion completed in 24.089985262s

• [SLOW TEST:48.524 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:39:52.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul 20 00:39:52.192: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul 20 00:40:01.266: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:01.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3593" for this suite.
Jul 20 00:40:07.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:40:07.363: INFO: namespace pods-3593 deletion completed in 6.089094512s

• [SLOW TEST:15.236 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:40:07.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9d71437f-ba74-417d-ad8d-51235f1d616f
STEP: Creating a pod to test consume configMaps
Jul 20 00:40:07.451: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f" in namespace "configmap-2345" to be "success or failure"
Jul 20 00:40:07.477: INFO: Pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.173184ms
Jul 20 00:40:09.481: INFO: Pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029829018s
Jul 20 00:40:11.485: INFO: Pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f": Phase="Running", Reason="", readiness=true. Elapsed: 4.034017942s
Jul 20 00:40:13.490: INFO: Pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03819219s
STEP: Saw pod success
Jul 20 00:40:13.490: INFO: Pod "pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f" satisfied condition "success or failure"
Jul 20 00:40:13.492: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f container configmap-volume-test: 
STEP: delete the pod
Jul 20 00:40:13.531: INFO: Waiting for pod pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f to disappear
Jul 20 00:40:13.555: INFO: Pod pod-configmaps-fb8e56fc-67c9-4805-9687-2927d02c782f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:13.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2345" for this suite.
Jul 20 00:40:19.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:40:19.648: INFO: namespace configmap-2345 deletion completed in 6.088755363s

• [SLOW TEST:12.285 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
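For reference, the ConfigMap-volume test above boils down to a manifest along these lines. This is an illustrative sketch, not the spec the suite actually submits; the names, image, and key/value are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000                # non-root UID, as the [LinuxOnly] non-root variant requires
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumed test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume  # each key becomes a file under the mount path
```

The pod runs to completion ("Succeeded"), and the framework then reads the container log to confirm the file contents, which matches the "success or failure" polling seen in the log.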
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:40:19.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jul 20 00:40:19.725: INFO: Waiting up to 5m0s for pod "var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc" in namespace "var-expansion-9482" to be "success or failure"
Jul 20 00:40:19.741: INFO: Pod "var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.342882ms
Jul 20 00:40:21.745: INFO: Pod "var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02047751s
Jul 20 00:40:23.749: INFO: Pod "var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024386441s
STEP: Saw pod success
Jul 20 00:40:23.749: INFO: Pod "var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc" satisfied condition "success or failure"
Jul 20 00:40:23.752: INFO: Trying to get logs from node iruya-worker pod var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc container dapi-container: 
STEP: delete the pod
Jul 20 00:40:23.795: INFO: Waiting for pod var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc to disappear
Jul 20 00:40:23.801: INFO: Pod var-expansion-73d15447-bff5-4349-8ef3-51bbc36d6bcc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:23.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9482" for this suite.
Jul 20 00:40:29.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:40:29.917: INFO: namespace var-expansion-9482 deletion completed in 6.112955071s

• [SLOW TEST:10.268 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
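The Variable Expansion test above exercises `$(VAR)` references inside `env` values, which the kubelet expands before the container starts. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed test image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"    # expands to "foo-value;;bar-value" at container start
```

Only variables defined earlier in the same `env` list (or via `envFrom`) are expanded; an unresolvable `$(NAME)` is left as literal text.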
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:40:29.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jul 20 00:40:30.001: INFO: Waiting up to 5m0s for pod "client-containers-214632e9-220f-43f4-9ccf-a028246fd27f" in namespace "containers-1617" to be "success or failure"
Jul 20 00:40:30.004: INFO: Pod "client-containers-214632e9-220f-43f4-9ccf-a028246fd27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966284ms
Jul 20 00:40:32.063: INFO: Pod "client-containers-214632e9-220f-43f4-9ccf-a028246fd27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061731613s
Jul 20 00:40:34.081: INFO: Pod "client-containers-214632e9-220f-43f4-9ccf-a028246fd27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079859766s
STEP: Saw pod success
Jul 20 00:40:34.081: INFO: Pod "client-containers-214632e9-220f-43f4-9ccf-a028246fd27f" satisfied condition "success or failure"
Jul 20 00:40:34.084: INFO: Trying to get logs from node iruya-worker2 pod client-containers-214632e9-220f-43f4-9ccf-a028246fd27f container test-container: 
STEP: delete the pod
Jul 20 00:40:34.143: INFO: Waiting for pod client-containers-214632e9-220f-43f4-9ccf-a028246fd27f to disappear
Jul 20 00:40:34.231: INFO: Pod client-containers-214632e9-220f-43f4-9ccf-a028246fd27f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:34.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1617" for this suite.
Jul 20 00:40:40.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:40:40.319: INFO: namespace containers-1617 deletion completed in 6.084134572s

• [SLOW TEST:10.402 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
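The Docker Containers test above checks the mapping between pod fields and Docker's image directives: `command` replaces the image's ENTRYPOINT, and `args` replaces its CMD. A hedged sketch of such a pod (image and arguments are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image with its own default ENTRYPOINT/CMD
    command: ["/bin/echo"]             # overrides the image ENTRYPOINT
    args: ["override", "command"]      # overrides the image CMD
```

If `command` is omitted and only `args` is set, the image ENTRYPOINT runs with the supplied args; if both are omitted, the image defaults are used unchanged.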
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:40:40.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 20 00:40:40.431: INFO: Waiting up to 5m0s for pod "pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba" in namespace "emptydir-7908" to be "success or failure"
Jul 20 00:40:40.465: INFO: Pod "pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba": Phase="Pending", Reason="", readiness=false. Elapsed: 33.419197ms
Jul 20 00:40:42.469: INFO: Pod "pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037779253s
Jul 20 00:40:44.474: INFO: Pod "pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042428535s
STEP: Saw pod success
Jul 20 00:40:44.474: INFO: Pod "pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba" satisfied condition "success or failure"
Jul 20 00:40:44.477: INFO: Trying to get logs from node iruya-worker2 pod pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba container test-container: 
STEP: delete the pod
Jul 20 00:40:44.515: INFO: Waiting for pod pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba to disappear
Jul 20 00:40:44.531: INFO: Pod pod-902e5a66-7854-4169-be0e-4b7a6a9d9dba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:44.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7908" for this suite.
Jul 20 00:40:50.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:40:50.623: INFO: namespace emptydir-7908 deletion completed in 6.088742993s

• [SLOW TEST:10.303 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:40:50.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:40:54.883: INFO: Waiting up to 5m0s for pod "client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66" in namespace "pods-7564" to be "success or failure"
Jul 20 00:40:54.904: INFO: Pod "client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66": Phase="Pending", Reason="", readiness=false. Elapsed: 20.343262ms
Jul 20 00:40:56.908: INFO: Pod "client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024117652s
Jul 20 00:40:58.912: INFO: Pod "client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028205411s
STEP: Saw pod success
Jul 20 00:40:58.912: INFO: Pod "client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66" satisfied condition "success or failure"
Jul 20 00:40:58.914: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66 container env3cont: 
STEP: delete the pod
Jul 20 00:40:59.078: INFO: Waiting for pod client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66 to disappear
Jul 20 00:40:59.094: INFO: Pod client-envvars-38c7064f-aed0-4b53-acd8-ec9457e3ad66 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:40:59.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7564" for this suite.
Jul 20 00:41:37.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:41:37.209: INFO: namespace pods-7564 deletion completed in 38.111727246s

• [SLOW TEST:46.586 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
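The Pods environment-variables test above relies on the kubelet injecting Docker-link-style variables for every Service that already exists when a pod starts. Sketching the idea with an illustrative service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fooservice          # illustrative; env var names derive from this
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
# A pod created in the same namespace AFTER this service exists sees, among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
#   FOOSERVICE_PORT=tcp://<cluster IP>:8765
# (the service name is upper-cased and dashes become underscores)
```

Because the variables are captured at container start, the test first creates the server pod and service, then launches the client pod and inspects its environment, which explains the ordering of steps in the log.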
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:41:37.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:41:42.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8599" for this suite.
Jul 20 00:42:04.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:42:04.465: INFO: namespace replication-controller-8599 deletion completed in 22.091593539s

• [SLOW TEST:27.256 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
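The ReplicationController adoption test above creates a bare pod first, then a controller whose selector matches it; instead of spawning a new replica, the controller adopts the orphan by setting itself as the pod's controller `ownerReference`. An illustrative pair of manifests:

```yaml
# 1. A bare pod carrying the label the controller will select.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: nginx               # assumed image
---
# 2. An RC with replicas: 1 and a matching selector; it adopts the
#    existing pod rather than creating a second one.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx
```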
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:42:04.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 20 00:42:12.623: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:12.647: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:14.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:14.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:16.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:16.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:18.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:18.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:20.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:20.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:22.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:22.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:24.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:24.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:26.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:26.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:28.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:28.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:30.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:30.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:32.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:32.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:34.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:34.651: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 00:42:36.647: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 00:42:36.659: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:42:36.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7171" for this suite.
Jul 20 00:42:58.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:42:58.758: INFO: namespace container-lifecycle-hook-7171 deletion completed in 22.084038006s

• [SLOW TEST:54.293 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
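The lifecycle-hook test above installs a `preStop` exec hook, deletes the pod, and then polls (the repeated "still exists" lines) until the graceful termination completes, finally asking a handler pod whether the hook fired. A hedged sketch of the hook wiring; the handler address and message are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30    # the hook must finish within this budget
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Hypothetical: notify an external handler pod that the hook ran.
          command: ["sh", "-c", "wget -qO- http://HANDLER_IP:8080/echo?msg=prestop"]
```

The kubelet runs the hook, then sends SIGTERM, and only escalates to SIGKILL once the grace period expires, which is why deletion takes many polling intervals in the log.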
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:42:58.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:42:58.868: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"346dcc23-179c-4049-84e4-551875164f55", Controller:(*bool)(0xc002583a8a), BlockOwnerDeletion:(*bool)(0xc002583a8b)}}
Jul 20 00:42:58.875: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5a4506e9-6611-4a5d-a937-8d14ec934f2e", Controller:(*bool)(0xc003555de6), BlockOwnerDeletion:(*bool)(0xc003555de7)}}
Jul 20 00:42:58.899: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b0660f61-66ef-4834-ae81-717ec6efe343", Controller:(*bool)(0xc0030e8376), BlockOwnerDeletion:(*bool)(0xc0030e8377)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:43:03.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-118" for this suite.
Jul 20 00:43:09.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:43:10.034: INFO: namespace gc-118 deletion completed in 6.119470196s

• [SLOW TEST:11.276 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
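The garbage-collector test above builds a deliberate ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and verifies the GC still deletes all three. The `ownerReferences` shape logged in Go syntax corresponds to a manifest fragment like this (UIDs are server-assigned, shown here as placeholders):

```yaml
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: <UID of pod3>         # placeholder; real UIDs come from the API server
    controller: true
    blockOwnerDeletion: true
# pod2 carries an analogous reference to pod1, and pod3 to pod2,
# closing the cycle the GC must break rather than deadlock on.
```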
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:43:10.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul 20 00:43:10.123: INFO: Waiting up to 5m0s for pod "downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86" in namespace "downward-api-983" to be "success or failure"
Jul 20 00:43:10.134: INFO: Pod "downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86": Phase="Pending", Reason="", readiness=false. Elapsed: 10.785451ms
Jul 20 00:43:12.138: INFO: Pod "downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015009275s
Jul 20 00:43:14.142: INFO: Pod "downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018775897s
STEP: Saw pod success
Jul 20 00:43:14.142: INFO: Pod "downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86" satisfied condition "success or failure"
Jul 20 00:43:14.145: INFO: Trying to get logs from node iruya-worker pod downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86 container dapi-container: 
STEP: delete the pod
Jul 20 00:43:14.230: INFO: Waiting for pod downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86 to disappear
Jul 20 00:43:14.323: INFO: Pod downward-api-51c00d1c-049c-44d0-bbce-1cf581f30a86 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:43:14.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-983" for this suite.
Jul 20 00:43:20.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:43:20.418: INFO: namespace downward-api-983 deletion completed in 6.090622678s

• [SLOW TEST:10.383 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
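The Downward API test above exposes the node's IP to the container through a `fieldRef`. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the IP of the node the pod is scheduled onto
```

Other commonly projected fields include `status.podIP`, `metadata.name`, and `metadata.namespace`; the test then checks the container log for a plausible IP value.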
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:43:20.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 20 00:43:20.475: INFO: Waiting up to 5m0s for pod "pod-41407d5b-4b86-411a-a85a-a688197f9854" in namespace "emptydir-2509" to be "success or failure"
Jul 20 00:43:20.493: INFO: Pod "pod-41407d5b-4b86-411a-a85a-a688197f9854": Phase="Pending", Reason="", readiness=false. Elapsed: 17.483222ms
Jul 20 00:43:22.497: INFO: Pod "pod-41407d5b-4b86-411a-a85a-a688197f9854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021299722s
Jul 20 00:43:24.501: INFO: Pod "pod-41407d5b-4b86-411a-a85a-a688197f9854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025409119s
STEP: Saw pod success
Jul 20 00:43:24.501: INFO: Pod "pod-41407d5b-4b86-411a-a85a-a688197f9854" satisfied condition "success or failure"
Jul 20 00:43:24.504: INFO: Trying to get logs from node iruya-worker2 pod pod-41407d5b-4b86-411a-a85a-a688197f9854 container test-container: 
STEP: delete the pod
Jul 20 00:43:24.530: INFO: Waiting for pod pod-41407d5b-4b86-411a-a85a-a688197f9854 to disappear
Jul 20 00:43:24.534: INFO: Pod pod-41407d5b-4b86-411a-a85a-a688197f9854 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:43:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2509" for this suite.
Jul 20 00:43:30.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:43:30.734: INFO: namespace emptydir-2509 deletion completed in 6.196954729s

• [SLOW TEST:10.316 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:43:30.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-a8923305-a1e0-4449-93ca-decd1f2a4f73 in namespace container-probe-4785
Jul 20 00:43:34.801: INFO: Started pod liveness-a8923305-a1e0-4449-93ca-decd1f2a4f73 in namespace container-probe-4785
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 00:43:34.804: INFO: Initial restart count of pod liveness-a8923305-a1e0-4449-93ca-decd1f2a4f73 is 0
Jul 20 00:43:55.233: INFO: Restart count of pod container-probe-4785/liveness-a8923305-a1e0-4449-93ca-decd1f2a4f73 is now 1 (20.429795086s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:43:55.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4785" for this suite.
Jul 20 00:44:01.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:44:01.479: INFO: namespace container-probe-4785 deletion completed in 6.170618364s

• [SLOW TEST:30.744 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
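The liveness-probe test above deploys a container whose `/healthz` endpoint starts failing after a while, then watches `restartCount` climb from 0 to 1 (about 20s elapsed in the log). The probe wiring looks roughly like this; the image and timings are illustrative, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # Assumed: an image that serves 200 on /healthz initially, then 500s.
    image: example/liveness-server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15   # grace before the first probe
      periodSeconds: 10
      failureThreshold: 1       # restart after a single failed probe
```

When the probe fails, the kubelet kills the container and restarts it per the pod's `restartPolicy`, incrementing `restartCount` in the container status, which is exactly what the test asserts on.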
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:44:01.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-988
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-988
STEP: Deleting pre-stop pod
Jul 20 00:44:20.742: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:44:20.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-988" for this suite.
Jul 20 00:45:03.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:45:03.201: INFO: namespace prestop-988 deletion completed in 42.325553558s

• [SLOW TEST:61.720 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:45:03.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jul 20 00:45:03.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8174'
Jul 20 00:45:03.502: INFO: stderr: ""
Jul 20 00:45:03.502: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 00:45:03.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:03.629: INFO: stderr: ""
Jul 20 00:45:03.629: INFO: stdout: "update-demo-nautilus-lb59m update-demo-nautilus-rgwxp "
Jul 20 00:45:03.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:03.739: INFO: stderr: ""
Jul 20 00:45:03.739: INFO: stdout: ""
Jul 20 00:45:03.739: INFO: update-demo-nautilus-lb59m is created but not running
Jul 20 00:45:08.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:08.838: INFO: stderr: ""
Jul 20 00:45:08.838: INFO: stdout: "update-demo-nautilus-lb59m update-demo-nautilus-rgwxp "
Jul 20 00:45:08.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:08.943: INFO: stderr: ""
Jul 20 00:45:08.943: INFO: stdout: "true"
Jul 20 00:45:08.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:09.026: INFO: stderr: ""
Jul 20 00:45:09.026: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:09.026: INFO: validating pod update-demo-nautilus-lb59m
Jul 20 00:45:09.030: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:09.030: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:09.030: INFO: update-demo-nautilus-lb59m is verified up and running
Jul 20 00:45:09.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgwxp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:09.120: INFO: stderr: ""
Jul 20 00:45:09.120: INFO: stdout: "true"
Jul 20 00:45:09.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rgwxp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:09.204: INFO: stderr: ""
Jul 20 00:45:09.204: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:09.204: INFO: validating pod update-demo-nautilus-rgwxp
Jul 20 00:45:09.208: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:09.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:09.208: INFO: update-demo-nautilus-rgwxp is verified up and running
STEP: scaling down the replication controller
Jul 20 00:45:09.210: INFO: scanned /root for discovery docs: 
Jul 20 00:45:09.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8174'
Jul 20 00:45:10.351: INFO: stderr: ""
Jul 20 00:45:10.351: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 00:45:10.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:10.443: INFO: stderr: ""
Jul 20 00:45:10.443: INFO: stdout: "update-demo-nautilus-lb59m update-demo-nautilus-rgwxp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 20 00:45:15.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:15.567: INFO: stderr: ""
Jul 20 00:45:15.567: INFO: stdout: "update-demo-nautilus-lb59m "
Jul 20 00:45:15.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:15.678: INFO: stderr: ""
Jul 20 00:45:15.678: INFO: stdout: "true"
Jul 20 00:45:15.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:15.767: INFO: stderr: ""
Jul 20 00:45:15.767: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:15.767: INFO: validating pod update-demo-nautilus-lb59m
Jul 20 00:45:15.770: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:15.770: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:15.770: INFO: update-demo-nautilus-lb59m is verified up and running
STEP: scaling up the replication controller
Jul 20 00:45:15.772: INFO: scanned /root for discovery docs: 
Jul 20 00:45:15.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8174'
Jul 20 00:45:16.901: INFO: stderr: ""
Jul 20 00:45:16.901: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 00:45:16.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:16.995: INFO: stderr: ""
Jul 20 00:45:16.995: INFO: stdout: "update-demo-nautilus-lb59m update-demo-nautilus-lrfgd "
Jul 20 00:45:16.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:17.077: INFO: stderr: ""
Jul 20 00:45:17.077: INFO: stdout: "true"
Jul 20 00:45:17.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:17.164: INFO: stderr: ""
Jul 20 00:45:17.164: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:17.164: INFO: validating pod update-demo-nautilus-lb59m
Jul 20 00:45:17.168: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:17.168: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:17.168: INFO: update-demo-nautilus-lb59m is verified up and running
Jul 20 00:45:17.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrfgd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:17.331: INFO: stderr: ""
Jul 20 00:45:17.331: INFO: stdout: ""
Jul 20 00:45:17.331: INFO: update-demo-nautilus-lrfgd is created but not running
Jul 20 00:45:22.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8174'
Jul 20 00:45:22.436: INFO: stderr: ""
Jul 20 00:45:22.436: INFO: stdout: "update-demo-nautilus-lb59m update-demo-nautilus-lrfgd "
Jul 20 00:45:22.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:22.541: INFO: stderr: ""
Jul 20 00:45:22.541: INFO: stdout: "true"
Jul 20 00:45:22.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lb59m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:22.645: INFO: stderr: ""
Jul 20 00:45:22.645: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:22.645: INFO: validating pod update-demo-nautilus-lb59m
Jul 20 00:45:22.649: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:22.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:22.649: INFO: update-demo-nautilus-lb59m is verified up and running
Jul 20 00:45:22.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrfgd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:22.738: INFO: stderr: ""
Jul 20 00:45:22.738: INFO: stdout: "true"
Jul 20 00:45:22.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lrfgd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8174'
Jul 20 00:45:22.836: INFO: stderr: ""
Jul 20 00:45:22.836: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 00:45:22.836: INFO: validating pod update-demo-nautilus-lrfgd
Jul 20 00:45:22.840: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 00:45:22.840: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 00:45:22.840: INFO: update-demo-nautilus-lrfgd is verified up and running
STEP: using delete to clean up resources
Jul 20 00:45:22.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8174'
Jul 20 00:45:22.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 00:45:22.942: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 20 00:45:22.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8174'
Jul 20 00:45:23.043: INFO: stderr: "No resources found.\n"
Jul 20 00:45:23.043: INFO: stdout: ""
Jul 20 00:45:23.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8174 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 00:45:23.142: INFO: stderr: ""
Jul 20 00:45:23.142: INFO: stdout: "update-demo-nautilus-lb59m\nupdate-demo-nautilus-lrfgd\n"
Jul 20 00:45:23.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8174'
Jul 20 00:45:23.740: INFO: stderr: "No resources found.\n"
Jul 20 00:45:23.740: INFO: stdout: ""
Jul 20 00:45:23.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8174 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 00:45:23.830: INFO: stderr: ""
Jul 20 00:45:23.830: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:45:23.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8174" for this suite.
Jul 20 00:45:46.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:45:46.224: INFO: namespace kubectl-8174 deletion completed in 22.390054772s

• [SLOW TEST:43.022 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:45:46.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 20 00:45:53.559: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:45:54.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5874" for this suite.
Jul 20 00:46:16.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:46:16.779: INFO: namespace replicaset-5874 deletion completed in 22.139911393s

• [SLOW TEST:30.555 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:46:16.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f14cf879-f17b-455c-a82d-ba012c248902
STEP: Creating a pod to test consume configMaps
Jul 20 00:46:16.900: INFO: Waiting up to 5m0s for pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2" in namespace "configmap-5868" to be "success or failure"
Jul 20 00:46:16.909: INFO: Pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.767223ms
Jul 20 00:46:18.913: INFO: Pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013312971s
Jul 20 00:46:20.916: INFO: Pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2": Phase="Running", Reason="", readiness=true. Elapsed: 4.016127594s
Jul 20 00:46:22.919: INFO: Pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019624309s
STEP: Saw pod success
Jul 20 00:46:22.919: INFO: Pod "pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2" satisfied condition "success or failure"
Jul 20 00:46:22.922: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2 container configmap-volume-test: 
STEP: delete the pod
Jul 20 00:46:22.953: INFO: Waiting for pod pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2 to disappear
Jul 20 00:46:22.957: INFO: Pod pod-configmaps-969f4b65-9c61-43fa-8aba-54cd491fb7c2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:46:22.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5868" for this suite.
Jul 20 00:46:28.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:46:29.064: INFO: namespace configmap-5868 deletion completed in 6.103448601s

• [SLOW TEST:12.285 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:46:29.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-639a1ca7-9590-48a6-9bfd-31cbe659a54c
STEP: Creating a pod to test consume secrets
Jul 20 00:46:29.341: INFO: Waiting up to 5m0s for pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee" in namespace "secrets-6936" to be "success or failure"
Jul 20 00:46:29.387: INFO: Pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee": Phase="Pending", Reason="", readiness=false. Elapsed: 45.624248ms
Jul 20 00:46:31.391: INFO: Pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049313931s
Jul 20 00:46:33.395: INFO: Pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053552923s
Jul 20 00:46:35.399: INFO: Pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057529439s
STEP: Saw pod success
Jul 20 00:46:35.399: INFO: Pod "pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee" satisfied condition "success or failure"
Jul 20 00:46:35.402: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee container secret-volume-test: 
STEP: delete the pod
Jul 20 00:46:35.463: INFO: Waiting for pod pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee to disappear
Jul 20 00:46:35.483: INFO: Pod pod-secrets-0c6d7de5-9c38-498d-957f-551f37a3ccee no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:46:35.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6936" for this suite.
Jul 20 00:46:41.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:46:41.564: INFO: namespace secrets-6936 deletion completed in 6.077377571s

• [SLOW TEST:12.500 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:46:41.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul 20 00:46:42.174: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:46:55.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8290" for this suite.
Jul 20 00:47:17.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:47:17.572: INFO: namespace init-container-8290 deletion completed in 22.208620631s

• [SLOW TEST:36.007 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:47:17.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 20 00:47:18.267: INFO: Pod name wrapped-volume-race-2dd5b231-996b-4047-9b10-27d402ca051a: Found 0 pods out of 5
Jul 20 00:47:23.276: INFO: Pod name wrapped-volume-race-2dd5b231-996b-4047-9b10-27d402ca051a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2dd5b231-996b-4047-9b10-27d402ca051a in namespace emptydir-wrapper-6174, will wait for the garbage collector to delete the pods
Jul 20 00:47:35.498: INFO: Deleting ReplicationController wrapped-volume-race-2dd5b231-996b-4047-9b10-27d402ca051a took: 7.793142ms
Jul 20 00:47:35.798: INFO: Terminating ReplicationController wrapped-volume-race-2dd5b231-996b-4047-9b10-27d402ca051a pods took: 300.341867ms
STEP: Creating RC which spawns configmap-volume pods
Jul 20 00:48:15.533: INFO: Pod name wrapped-volume-race-a4e0dea7-2279-47e2-99c0-424adbcc1160: Found 0 pods out of 5
Jul 20 00:48:20.570: INFO: Pod name wrapped-volume-race-a4e0dea7-2279-47e2-99c0-424adbcc1160: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a4e0dea7-2279-47e2-99c0-424adbcc1160 in namespace emptydir-wrapper-6174, will wait for the garbage collector to delete the pods
Jul 20 00:48:36.647: INFO: Deleting ReplicationController wrapped-volume-race-a4e0dea7-2279-47e2-99c0-424adbcc1160 took: 6.068943ms
Jul 20 00:48:36.947: INFO: Terminating ReplicationController wrapped-volume-race-a4e0dea7-2279-47e2-99c0-424adbcc1160 pods took: 300.294711ms
STEP: Creating RC which spawns configmap-volume pods
Jul 20 00:49:16.180: INFO: Pod name wrapped-volume-race-e22f698e-fec5-4fb0-aeb5-7820124bb135: Found 0 pods out of 5
Jul 20 00:49:21.188: INFO: Pod name wrapped-volume-race-e22f698e-fec5-4fb0-aeb5-7820124bb135: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e22f698e-fec5-4fb0-aeb5-7820124bb135 in namespace emptydir-wrapper-6174, will wait for the garbage collector to delete the pods
Jul 20 00:49:37.271: INFO: Deleting ReplicationController wrapped-volume-race-e22f698e-fec5-4fb0-aeb5-7820124bb135 took: 6.241554ms
Jul 20 00:49:37.572: INFO: Terminating ReplicationController wrapped-volume-race-e22f698e-fec5-4fb0-aeb5-7820124bb135 pods took: 300.262095ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:50:25.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6174" for this suite.
Jul 20 00:50:34.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:50:34.133: INFO: namespace emptydir-wrapper-6174 deletion completed in 8.10835274s

• [SLOW TEST:196.560 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:50:34.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-32eeaf9a-93a1-41b5-bb4d-aebfded1bf95
STEP: Creating a pod to test consume configMaps
Jul 20 00:50:34.334: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa" in namespace "projected-6911" to be "success or failure"
Jul 20 00:50:34.378: INFO: Pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa": Phase="Pending", Reason="", readiness=false. Elapsed: 43.407135ms
Jul 20 00:50:36.444: INFO: Pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109342939s
Jul 20 00:50:38.448: INFO: Pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113753184s
Jul 20 00:50:40.453: INFO: Pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118390803s
STEP: Saw pod success
Jul 20 00:50:40.453: INFO: Pod "pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa" satisfied condition "success or failure"
Jul 20 00:50:40.456: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 00:50:40.487: INFO: Waiting for pod pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa to disappear
Jul 20 00:50:40.506: INFO: Pod pod-projected-configmaps-abf3173a-268f-45cd-84ac-8558bda8affa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:50:40.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6911" for this suite.
Jul 20 00:50:46.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:50:46.682: INFO: namespace projected-6911 deletion completed in 6.172187461s

• [SLOW TEST:12.549 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
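The records above all follow the framework's wait loop: "Waiting up to 5m0s for pod … to be 'success or failure'", with a phase sample roughly every 2 seconds until the pod reaches Succeeded or Failed. A minimal sketch of that polling pattern (function and parameter names here are hypothetical, not the framework's actual API; the real loop lives in test/e2e/framework):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout expires.

    Mirrors the log's "Waiting up to 5m0s ... to be 'success or failure'" loop:
    one sample immediately, then one every `interval` seconds.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.1f}s')
        if phase in TERMINAL_PHASES:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated phase sequence like the records above: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), sleep=lambda _: None)
```

The no-op `sleep` keeps the sketch instantaneous; against a real cluster the getter would read `pod.status.phase` from the API server.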
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:50:46.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f8840641-300b-4b4f-85ed-8c2c7656976c
STEP: Creating a pod to test consume configMaps
Jul 20 00:50:46.827: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5" in namespace "projected-8273" to be "success or failure"
Jul 20 00:50:46.843: INFO: Pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.379481ms
Jul 20 00:50:48.848: INFO: Pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020608614s
Jul 20 00:50:50.899: INFO: Pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5": Phase="Running", Reason="", readiness=true. Elapsed: 4.071895708s
Jul 20 00:50:52.902: INFO: Pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07535896s
STEP: Saw pod success
Jul 20 00:50:52.902: INFO: Pod "pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5" satisfied condition "success or failure"
Jul 20 00:50:52.983: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 00:50:53.000: INFO: Waiting for pod pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5 to disappear
Jul 20 00:50:53.005: INFO: Pod pod-projected-configmaps-262579c1-701b-4e49-bf87-562c099931e5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:50:53.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8273" for this suite.
Jul 20 00:50:59.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:50:59.105: INFO: namespace projected-8273 deletion completed in 6.098101817s

• [SLOW TEST:12.423 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:50:59.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1415
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1415
STEP: Creating statefulset with conflicting port in namespace statefulset-1415
STEP: Waiting until pod test-pod will start running in namespace statefulset-1415
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1415
Jul 20 00:51:05.267: INFO: Observed stateful pod in namespace: statefulset-1415, name: ss-0, uid: 8e870ae3-0c62-44c4-8a6a-3af7f54f111f, status phase: Failed. Waiting for statefulset controller to delete.
Jul 20 00:51:05.281: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1415
STEP: Removing pod with conflicting port in namespace statefulset-1415
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1415 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 20 00:51:11.407: INFO: Deleting all statefulset in ns statefulset-1415
Jul 20 00:51:11.409: INFO: Scaling statefulset ss to 0
Jul 20 00:51:31.461: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 00:51:31.464: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:51:31.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1415" for this suite.
Jul 20 00:51:39.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:51:39.569: INFO: namespace statefulset-1415 deletion completed in 8.087140316s

• [SLOW TEST:40.464 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
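The StatefulSet record above deliberately schedules ss-0 onto a node with a conflicting port so the pod fails, then waits for the controller to delete and recreate it ("recreated and deleted at least once"). The detection logic can be sketched as watching for the same pod name to reappear under a new UID after a delete event (the event-tuple shape and UIDs here are illustrative assumptions, not the real watch API):

```python
def watch_for_recreation(events):
    """Track a stateful pod through fail -> delete -> recreate, as in the
    statefulset-1415 record above. `events` is a sequence of
    (kind, name, uid) tuples from a hypothetical watch stream; returns the
    UIDs observed for the pod once recreation is confirmed.
    """
    seen_uids = []
    deleted = False
    for kind, name, uid in events:
        if name != "ss-0":
            continue
        if kind == "ADDED" and uid not in seen_uids:
            seen_uids.append(uid)
        elif kind == "DELETED":
            deleted = True
        if deleted and len(seen_uids) >= 2:
            # Same pod name, new UID after a delete: the controller recreated it.
            return seen_uids
    raise RuntimeError("pod ss-0 was never recreated")

stream = [
    ("ADDED", "ss-0", "uid-1"),    # original pod, Failed due to the port conflict
    ("DELETED", "ss-0", "uid-1"),  # statefulset controller removes the failed pod
    ("ADDED", "ss-0", "uid-2"),    # controller recreates it under a fresh UID
]
uids = watch_for_recreation(stream)
```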
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:51:39.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 00:51:39.773: INFO: Waiting up to 5m0s for pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d" in namespace "emptydir-3891" to be "success or failure"
Jul 20 00:51:39.815: INFO: Pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.177192ms
Jul 20 00:51:41.954: INFO: Pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180367759s
Jul 20 00:51:43.957: INFO: Pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d": Phase="Running", Reason="", readiness=true. Elapsed: 4.183678516s
Jul 20 00:51:45.961: INFO: Pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187692527s
STEP: Saw pod success
Jul 20 00:51:45.961: INFO: Pod "pod-3fa245ce-d825-426f-becb-38a66b3fb95d" satisfied condition "success or failure"
Jul 20 00:51:45.963: INFO: Trying to get logs from node iruya-worker pod pod-3fa245ce-d825-426f-becb-38a66b3fb95d container test-container: 
STEP: delete the pod
Jul 20 00:51:46.140: INFO: Waiting for pod pod-3fa245ce-d825-426f-becb-38a66b3fb95d to disappear
Jul 20 00:51:46.167: INFO: Pod pod-3fa245ce-d825-426f-becb-38a66b3fb95d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:51:46.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3891" for this suite.
Jul 20 00:51:52.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:51:52.349: INFO: namespace emptydir-3891 deletion completed in 6.178505707s

• [SLOW TEST:12.779 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:51:52.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-ecb283d4-4499-4c22-b6b1-1309eddc2d26 in namespace container-probe-5814
Jul 20 00:51:58.587: INFO: Started pod test-webserver-ecb283d4-4499-4c22-b6b1-1309eddc2d26 in namespace container-probe-5814
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 00:51:58.590: INFO: Initial restart count of pod test-webserver-ecb283d4-4499-4c22-b6b1-1309eddc2d26 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:55:59.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5814" for this suite.
Jul 20 00:56:06.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:56:06.103: INFO: namespace container-probe-5814 deletion completed in 6.350109669s

• [SLOW TEST:253.754 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
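The container-probe record above is the long one (253 seconds): after noting "Initial restart count … is 0", the test simply observes the pod for about four minutes and verifies restartCount never rises, proving the /healthz liveness probe keeps passing. A sketch of that check, assuming hypothetical (elapsed, restartCount) samples from a status poller:

```python
def assert_no_restarts(samples, window):
    """Verify restartCount never rises over an observation window, as the
    container-probe-5814 record above does for ~4 minutes. `samples` is a
    list of (elapsed_seconds, restart_count) pairs.
    """
    initial = samples[0][1]
    for elapsed, count in samples:
        if elapsed > window:
            break
        if count != initial:
            raise AssertionError(
                f"restartCount changed {initial} -> {count} at {elapsed}s")
    return initial

# Samples consistent with the record above: the count stays at 0 throughout.
initial = assert_no_restarts([(0, 0), (60, 0), (120, 0), (240, 0)], window=240)
```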
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:56:06.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul 20 00:56:06.171: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:56:14.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8190" for this suite.
Jul 20 00:56:20.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:56:20.781: INFO: namespace init-container-8190 deletion completed in 6.100082139s

• [SLOW TEST:14.678 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:56:20.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1ffd82e7-75fe-48e0-b48d-e9573fdaf276
STEP: Creating a pod to test consume secrets
Jul 20 00:56:20.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d" in namespace "projected-8192" to be "success or failure"
Jul 20 00:56:20.961: INFO: Pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.865955ms
Jul 20 00:56:22.965: INFO: Pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021298699s
Jul 20 00:56:24.969: INFO: Pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025185965s
Jul 20 00:56:26.973: INFO: Pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029532469s
STEP: Saw pod success
Jul 20 00:56:26.973: INFO: Pod "pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d" satisfied condition "success or failure"
Jul 20 00:56:26.976: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 00:56:27.005: INFO: Waiting for pod pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d to disappear
Jul 20 00:56:27.009: INFO: Pod pod-projected-secrets-60058397-0215-4eb5-8abb-cd22f791788d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:56:27.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8192" for this suite.
Jul 20 00:56:33.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:56:33.128: INFO: namespace projected-8192 deletion completed in 6.114614309s

• [SLOW TEST:12.346 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:56:33.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:56:33.269: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e" in namespace "downward-api-9090" to be "success or failure"
Jul 20 00:56:33.290: INFO: Pod "downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.05035ms
Jul 20 00:56:35.294: INFO: Pod "downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024511545s
Jul 20 00:56:37.298: INFO: Pod "downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028290893s
STEP: Saw pod success
Jul 20 00:56:37.298: INFO: Pod "downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e" satisfied condition "success or failure"
Jul 20 00:56:37.301: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e container client-container: 
STEP: delete the pod
Jul 20 00:56:37.332: INFO: Waiting for pod downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e to disappear
Jul 20 00:56:37.419: INFO: Pod downwardapi-volume-a86a7c2e-f106-446b-bc1e-2681632e9e1e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:56:37.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9090" for this suite.
Jul 20 00:56:43.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:56:43.513: INFO: namespace downward-api-9090 deletion completed in 6.090596035s

• [SLOW TEST:10.385 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:56:43.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7119/configmap-test-68792c41-3538-4d34-85a0-a7311bc6d48b
STEP: Creating a pod to test consume configMaps
Jul 20 00:56:43.640: INFO: Waiting up to 5m0s for pod "pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0" in namespace "configmap-7119" to be "success or failure"
Jul 20 00:56:43.644: INFO: Pod "pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.916852ms
Jul 20 00:56:45.737: INFO: Pod "pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09601839s
Jul 20 00:56:47.778: INFO: Pod "pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137886836s
STEP: Saw pod success
Jul 20 00:56:47.778: INFO: Pod "pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0" satisfied condition "success or failure"
Jul 20 00:56:47.782: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0 container env-test: 
STEP: delete the pod
Jul 20 00:56:47.808: INFO: Waiting for pod pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0 to disappear
Jul 20 00:56:47.854: INFO: Pod pod-configmaps-bed62319-47e9-4268-b234-b6c75fb689e0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:56:47.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7119" for this suite.
Jul 20 00:56:53.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:56:53.956: INFO: namespace configmap-7119 deletion completed in 6.09821604s

• [SLOW TEST:10.442 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
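The configmap-7119 record above exercises ConfigMap consumption through the environment: the ConfigMap's data keys become environment variables visible to the env-test container. The expansion can be sketched as a simple key/value mapping (the ConfigMap name and data keys below are hypothetical examples, not the test's actual data):

```python
def configmap_to_env(data):
    """Expand a ConfigMap's data into container environment-variable entries,
    the way the test consumes its ConfigMap via the pod's env settings.
    Sorted for a deterministic order in this sketch.
    """
    return [{"name": key, "value": value} for key, value in sorted(data.items())]

env = configmap_to_env({"data-1": "value-1"})
```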
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:56:53.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2344
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2344 to expose endpoints map[]
Jul 20 00:56:54.073: INFO: Get endpoints failed (51.939991ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 20 00:56:55.077: INFO: successfully validated that service multi-endpoint-test in namespace services-2344 exposes endpoints map[] (1.055992109s elapsed)
STEP: Creating pod pod1 in namespace services-2344
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2344 to expose endpoints map[pod1:[100]]
Jul 20 00:56:59.169: INFO: successfully validated that service multi-endpoint-test in namespace services-2344 exposes endpoints map[pod1:[100]] (4.086003591s elapsed)
STEP: Creating pod pod2 in namespace services-2344
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2344 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 20 00:57:03.292: INFO: successfully validated that service multi-endpoint-test in namespace services-2344 exposes endpoints map[pod1:[100] pod2:[101]] (4.118801899s elapsed)
STEP: Deleting pod pod1 in namespace services-2344
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2344 to expose endpoints map[pod2:[101]]
Jul 20 00:57:04.353: INFO: successfully validated that service multi-endpoint-test in namespace services-2344 exposes endpoints map[pod2:[101]] (1.050666095s elapsed)
STEP: Deleting pod pod2 in namespace services-2344
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2344 to expose endpoints map[]
Jul 20 00:57:05.479: INFO: successfully validated that service multi-endpoint-test in namespace services-2344 exposes endpoints map[] (1.121931891s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:57:05.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2344" for this suite.
Jul 20 00:57:11.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:57:11.626: INFO: namespace services-2344 deletion completed in 6.088042125s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:17.670 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
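The services-2344 record above validates an evolving endpoints map: empty, then map[pod1:[100]], then map[pod1:[100] pod2:[101]], and back down as pods are deleted. The expected map is derived from the set of running pods backing the service; a sketch of that derivation (pod names and ports taken from the record, the function itself is an illustration):

```python
def expected_endpoints(pods):
    """Build the endpoints map the multiport test validates: each running pod
    contributes its container ports under its name; deleted pods drop out.
    """
    return {name: sorted(ports) for name, ports in pods.items() if ports}

m0 = expected_endpoints({})                                # before any pods
m1 = expected_endpoints({"pod1": [100]})                   # after pod1
m2 = expected_endpoints({"pod1": [100], "pod2": [101]})    # after pod2
```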
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:57:11.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:57:11.696: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 00:57:17.974: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 20 00:57:22.979: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 20 00:57:22.979: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 20 00:57:24.983: INFO: Creating deployment "test-rollover-deployment"
Jul 20 00:57:25.010: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 20 00:57:27.040: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 20 00:57:27.046: INFO: Ensure that both replica sets have 1 created replica
Jul 20 00:57:27.052: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 20 00:57:27.058: INFO: Updating deployment test-rollover-deployment
Jul 20 00:57:27.058: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 20 00:57:29.078: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 20 00:57:29.084: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 20 00:57:29.091: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:29.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803447, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:31.098: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:31.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803450, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:33.098: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:33.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803450, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:35.099: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:35.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803450, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:37.099: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:37.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803450, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:39.098: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 00:57:39.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803450, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730803445, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 00:57:41.099: INFO: 
Jul 20 00:57:41.099: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul 20 00:57:41.107: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-373,SelfLink:/apis/apps/v1/namespaces/deployment-373/deployments/test-rollover-deployment,UID:6e6599f0-5d97-4096-9ee7-f041ab07d4b4,ResourceVersion:52448,Generation:2,CreationTimestamp:2020-07-20 00:57:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-20 00:57:25 +0000 UTC 2020-07-20 00:57:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-20 00:57:40 +0000 UTC 2020-07-20 00:57:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul 20 00:57:41.111: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-373,SelfLink:/apis/apps/v1/namespaces/deployment-373/replicasets/test-rollover-deployment-854595fc44,UID:6d383d51-594d-4680-abb6-4a8ac8bcfbef,ResourceVersion:52437,Generation:2,CreationTimestamp:2020-07-20 00:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6e6599f0-5d97-4096-9ee7-f041ab07d4b4 0xc003267737 0xc003267738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 20 00:57:41.111: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 20 00:57:41.112: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-373,SelfLink:/apis/apps/v1/namespaces/deployment-373/replicasets/test-rollover-controller,UID:780bcb1a-d761-4d0a-a960-b486508eec14,ResourceVersion:52446,Generation:2,CreationTimestamp:2020-07-20 00:57:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6e6599f0-5d97-4096-9ee7-f041ab07d4b4 0xc003267667 0xc003267668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 20 00:57:41.112: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-373,SelfLink:/apis/apps/v1/namespaces/deployment-373/replicasets/test-rollover-deployment-9b8b997cf,UID:01824bd7-18f6-4e70-ba13-c18bfd7a5841,ResourceVersion:52397,Generation:2,CreationTimestamp:2020-07-20 00:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6e6599f0-5d97-4096-9ee7-f041ab07d4b4 0xc003267800 0xc003267801}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 20 00:57:41.115: INFO: Pod "test-rollover-deployment-854595fc44-b6k5r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-b6k5r,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-373,SelfLink:/api/v1/namespaces/deployment-373/pods/test-rollover-deployment-854595fc44-b6k5r,UID:9e4b82a9-a90d-447d-b580-19fa2c6fe790,ResourceVersion:52414,Generation:0,CreationTimestamp:2020-07-20 00:57:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6d383d51-594d-4680-abb6-4a8ac8bcfbef 0xc0029e83c7 0xc0029e83c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w9wv7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w9wv7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-w9wv7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029e8440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029e8460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:57:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:57:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:57:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 00:57:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.151,StartTime:2020-07-20 00:57:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-20 00:57:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://162660e3ee5467739e77fb5c168c7e73c56c52e59bde20407d7d86ce6cf1e83a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:57:41.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-373" for this suite.
Jul 20 00:57:47.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:57:47.431: INFO: namespace deployment-373 deletion completed in 6.311817179s

• [SLOW TEST:29.553 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
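
For reference, the Deployment object dumped above corresponds roughly to the following manifest. This is a sketch reconstructed from the logged spec (replicas, selector, image, `minReadySeconds`, and rolling-update parameters are taken from the dump); anything not present in the log is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # a new pod must stay ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollover
      maxUnavailable: 0        # never drop below the desired replica count
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the old replica set is only scaled down after the new pod has been ready for the minimum window, which is exactly the waiting loop visible in the status dumps above.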
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:57:47.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jul 20 00:57:47.518: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix820886019/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:57:47.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5470" for this suite.
Jul 20 00:57:53.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:57:53.730: INFO: namespace kubectl-5470 deletion completed in 6.140965879s

• [SLOW TEST:6.298 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:57:53.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-fa6871dc-bf07-4797-a977-2449bbfe0b81
STEP: Creating secret with name s-test-opt-upd-877e70b1-0981-4c95-adc6-b062654f6097
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fa6871dc-bf07-4797-a977-2449bbfe0b81
STEP: Updating secret s-test-opt-upd-877e70b1-0981-4c95-adc6-b062654f6097
STEP: Creating secret with name s-test-opt-create-7c833fee-9a59-49a0-80c6-05e465a42416
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:58:01.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3056" for this suite.
Jul 20 00:58:24.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:58:24.081: INFO: namespace secrets-3056 deletion completed in 22.107015849s

• [SLOW TEST:30.351 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
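
The deletion step in this test relies on the secret volume being marked optional, so the pod keeps running after the secret disappears. A minimal sketch of such a pod follows; the secret name is taken from the log, while the pod name, image, and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo       # illustrative name, not from the log
spec:
  containers:
  - name: app
    image: busybox                 # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-del-fa6871dc-bf07-4797-a977-2449bbfe0b81
      optional: true               # pod starts and keeps running even if the secret is absent
```

The kubelet periodically resyncs such volumes, which is why the test can "wait to observe update in volume" after creating and updating secrets.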
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:58:24.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:58:28.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5312" for this suite.
Jul 20 00:58:34.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:58:34.333: INFO: namespace emptydir-wrapper-5312 deletion completed in 6.093099979s

• [SLOW TEST:10.252 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
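
The "should not conflict" check mounts a secret-backed and a configMap-backed volume in the same pod and verifies the wrapper emptyDirs underneath them do not collide. A minimal sketch, with all names illustrative (the log does not print the objects this test creates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo               # illustrative name
spec:
  containers:
  - name: app
    image: busybox                 # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
    - name: configmap-vol
      mountPath: /etc/config
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret        # illustrative
  - name: configmap-vol
    configMap:
      name: my-config              # illustrative
```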
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:58:34.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8806/configmap-test-9caf5461-8109-4bb8-8224-7b18d50fe00c
STEP: Creating a pod to test consume configMaps
Jul 20 00:58:34.469: INFO: Waiting up to 5m0s for pod "pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c" in namespace "configmap-8806" to be "success or failure"
Jul 20 00:58:34.473: INFO: Pod "pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148124ms
Jul 20 00:58:36.565: INFO: Pod "pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096039855s
Jul 20 00:58:38.569: INFO: Pod "pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100325728s
STEP: Saw pod success
Jul 20 00:58:38.569: INFO: Pod "pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c" satisfied condition "success or failure"
Jul 20 00:58:38.588: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c container env-test: 
STEP: delete the pod
Jul 20 00:58:38.654: INFO: Waiting for pod pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c to disappear
Jul 20 00:58:38.676: INFO: Pod pod-configmaps-60dbf305-73b0-48e1-b37f-e042602b8e0c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:58:38.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8806" for this suite.
Jul 20 00:58:44.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:58:44.804: INFO: namespace configmap-8806 deletion completed in 6.12480779s

• [SLOW TEST:10.471 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
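
A pod consuming a ConfigMap through an environment variable, as this test does, looks roughly like the sketch below. The ConfigMap and container names are taken from the log; the key, variable name, image, and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                 # illustrative image
    command: ["sh", "-c", "env"]   # dump the environment so the test can check the value
    env:
    - name: CONFIG_DATA            # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-9caf5461-8109-4bb8-8224-7b18d50fe00c
          key: data-1              # illustrative key
```

The "success or failure" wait above then just checks that the container printed the expected value and exited 0.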
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:58:44.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 00:58:44.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070" in namespace "projected-525" to be "success or failure"
Jul 20 00:58:44.870: INFO: Pod "downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678799ms
Jul 20 00:58:46.874: INFO: Pod "downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008842454s
Jul 20 00:58:48.878: INFO: Pod "downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012440657s
STEP: Saw pod success
Jul 20 00:58:48.878: INFO: Pod "downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070" satisfied condition "success or failure"
Jul 20 00:58:48.880: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070 container client-container: 
STEP: delete the pod
Jul 20 00:58:48.924: INFO: Waiting for pod downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070 to disappear
Jul 20 00:58:48.941: INFO: Pod downwardapi-volume-7f0fa308-f3d0-4cab-b3d2-f437a6ed4070 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:58:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-525" for this suite.
Jul 20 00:58:54.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:58:55.037: INFO: namespace projected-525 deletion completed in 6.092064031s

• [SLOW TEST:10.232 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
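The "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from a fixed-interval poll loop: the framework re-checks the pod's phase every couple of seconds until it reaches a terminal state or the timeout expires, logging the elapsed time each round. A minimal sketch of that pattern (pure Python; the function name and the stubbed phase sequence are illustrative, not the framework's actual API):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse. Each iteration corresponds to one
    'Phase="Pending" ... Elapsed: ...' line in the log above."""
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return True, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)

# Fake pod that reports Pending twice, then Succeeded (as in the log).
phases = iter(["Pending", "Pending", "Succeeded"])
ok, elapsed = wait_for_condition(lambda: next(phases) == "Succeeded",
                                 timeout=10.0, interval=0.0)
```

With a real client, `check` would fetch the pod and compare its `status.phase` against `Succeeded`/`Failed`, which is the "success or failure" condition named in the log.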
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:58:55.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul 20 00:58:59.636: INFO: Successfully updated pod "annotationupdateb850fe66-69d1-4f10-aadb-408a0967ef35"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 00:59:03.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2380" for this suite.
Jul 20 00:59:25.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 00:59:25.755: INFO: namespace projected-2380 deletion completed in 22.089709314s

• [SLOW TEST:30.717 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 00:59:25.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0720 01:00:06.262551       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 01:00:06.262: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:00:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-737" for this suite.
Jul 20 01:00:14.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:00:14.351: INFO: namespace gc-737 deletion completed in 8.084846109s

• [SLOW TEST:48.596 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
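The garbage-collector test above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods are *not* collected. The ownership semantics being verified can be sketched with a toy in-memory object store (field and variable names are illustrative; real objects carry structured ownerReferences, not bare UIDs):

```python
def delete_owner(store, owner_uid, propagation="Orphan"):
    """Delete `owner_uid` from a {uid: object} store.
    With 'Orphan' propagation, dependents survive and their reference to
    the deleted owner is stripped; with 'Background', the garbage
    collector would delete the dependents as well."""
    del store[owner_uid]
    dependents = [uid for uid, obj in store.items()
                  if owner_uid in obj.get("ownerReferences", [])]
    for uid in dependents:
        if propagation == "Orphan":
            store[uid]["ownerReferences"].remove(owner_uid)
        else:  # "Background"
            del store[uid]

store = {
    "rc-1":  {"kind": "ReplicationController", "ownerReferences": []},
    "pod-a": {"kind": "Pod", "ownerReferences": ["rc-1"]},
    "pod-b": {"kind": "Pod", "ownerReferences": ["rc-1"]},
}
delete_owner(store, "rc-1", propagation="Orphan")
```

After the orphaning delete, both pods remain in the store with no owner, which is exactly the state the test's 30-second wait is checking for.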
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:00:14.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:00:41.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1048" for this suite.
Jul 20 01:00:47.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:00:47.607: INFO: namespace namespaces-1048 deletion completed in 6.2065522s
STEP: Destroying namespace "nsdeletetest-6857" for this suite.
Jul 20 01:00:47.609: INFO: Namespace nsdeletetest-6857 was already deleted
STEP: Destroying namespace "nsdeletetest-8582" for this suite.
Jul 20 01:00:53.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:00:53.776: INFO: namespace nsdeletetest-8582 deletion completed in 6.166217527s

• [SLOW TEST:39.425 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
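The namespaces test above creates a pod in a test namespace, deletes the namespace, recreates it, and verifies it comes back empty. The invariant it checks — namespace deletion removes every namespaced object inside it — can be stated as a tiny toy model (illustrative only; the real API server drains objects via finalizers before the namespace disappears):

```python
def delete_namespace(cluster, ns):
    """Remove a namespace and, with it, every namespaced object it
    contains. A recreated namespace with the same name starts empty."""
    cluster.pop(ns, None)

cluster = {"nsdeletetest": {"pods": ["test-pod"]}}
delete_namespace(cluster, "nsdeletetest")
cluster["nsdeletetest"] = {"pods": []}  # recreate the namespace
```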
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:00:53.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jul 20 01:00:57.873: INFO: Pod pod-hostip-dc414c57-a1bd-480a-93c5-0e223bdca0ce has hostIP: 172.18.0.7
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:00:57.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9784" for this suite.
Jul 20 01:01:19.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:01:19.960: INFO: namespace pods-9784 deletion completed in 22.084038004s

• [SLOW TEST:26.185 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:01:19.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0e40bb95-f9c7-4e92-b2c7-8e91bab52c21
STEP: Creating secret with name s-test-opt-upd-34160831-bd52-4765-8ae3-6ad935ad4860
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0e40bb95-f9c7-4e92-b2c7-8e91bab52c21
STEP: Updating secret s-test-opt-upd-34160831-bd52-4765-8ae3-6ad935ad4860
STEP: Creating secret with name s-test-opt-create-0c016cbc-9c90-42e5-904d-d22409b2c879
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:01:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3734" for this suite.
Jul 20 01:01:51.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:01:51.587: INFO: namespace projected-3734 deletion completed in 22.098529477s

• [SLOW TEST:31.626 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:01:51.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jul 20 01:01:55.720: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul 20 01:02:10.812: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:02:10.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7554" for this suite.
Jul 20 01:02:16.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:02:16.915: INFO: namespace pods-7554 deletion completed in 6.096953453s

• [SLOW TEST:25.328 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
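The Delete Grace Period test above submits a pod, deletes it gracefully, and confirms the kubelet observed the termination notice before the pod disappeared. The rule for which grace period applies can be sketched as follows (a simplified reading of the deletion semantics: an explicit grace period in the delete options overrides the pod spec's `terminationGracePeriodSeconds`, whose default is 30 seconds):

```python
def effective_grace_period(spec_grace=30, delete_options_grace=None):
    """Return the grace period (seconds) applied when a pod is deleted:
    an explicit value in the delete options wins; otherwise the pod's
    spec.terminationGracePeriodSeconds (default 30) is used."""
    if delete_options_grace is not None:
        return delete_options_grace
    return spec_grace
```

So a plain `delete` on a default pod gives the kubelet up to 30 seconds to stop containers, while `--grace-period=0 --force` would skip the wait entirely.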
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:02:16.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 01:02:17.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7" in namespace "projected-6857" to be "success or failure"
Jul 20 01:02:17.346: INFO: Pod "downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7": Phase="Pending", Reason="", readiness=false. Elapsed: 97.054698ms
Jul 20 01:02:19.350: INFO: Pod "downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101015891s
Jul 20 01:02:21.355: INFO: Pod "downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1051791s
STEP: Saw pod success
Jul 20 01:02:21.355: INFO: Pod "downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7" satisfied condition "success or failure"
Jul 20 01:02:21.357: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7 container client-container: 
STEP: delete the pod
Jul 20 01:02:21.498: INFO: Waiting for pod downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7 to disappear
Jul 20 01:02:21.573: INFO: Pod downwardapi-volume-4368fefc-589b-4db7-839a-47f4bb2918a7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:02:21.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6857" for this suite.
Jul 20 01:02:27.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:02:27.826: INFO: namespace projected-6857 deletion completed in 6.248655628s

• [SLOW TEST:10.910 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:02:27.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:02:34.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3716" for this suite.
Jul 20 01:03:14.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:03:14.274: INFO: namespace kubelet-test-3716 deletion completed in 40.104021065s

• [SLOW TEST:46.448 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:03:14.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 20 01:03:14.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3990,SelfLink:/api/v1/namespaces/watch-3990/configmaps/e2e-watch-test-resource-version,UID:968d8cee-5091-4a46-960f-92136438bef5,ResourceVersion:53655,Generation:0,CreationTimestamp:2020-07-20 01:03:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 01:03:14.397: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3990,SelfLink:/api/v1/namespaces/watch-3990/configmaps/e2e-watch-test-resource-version,UID:968d8cee-5091-4a46-960f-92136438bef5,ResourceVersion:53656,Generation:0,CreationTimestamp:2020-07-20 01:03:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:03:14.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3990" for this suite.
Jul 20 01:03:20.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:03:20.509: INFO: namespace watch-3990 deletion completed in 6.108862258s

• [SLOW TEST:6.234 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
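The watch test above starts a watch at the resourceVersion returned by the *first* update, and the log shows exactly the two later events (the second MODIFIED at RV 53655 and the DELETED at RV 53656) being delivered. The replay semantics can be sketched over a recorded event list (toy model: real resourceVersions are opaque strings and must not be compared numerically; the integer RVs here, including the assumed earlier ones, are illustrative):

```python
def watch_from(events, resource_version):
    """Replay only the events that occurred after the given
    resourceVersion -- the meaning of starting a watch at a specific RV."""
    return [(etype, rv) for etype, rv in events if rv > resource_version]

history = [("ADDED", 53653), ("MODIFIED", 53654),
           ("MODIFIED", 53655), ("DELETED", 53656)]
# Watch from the first update's RV: only the later two events arrive.
observed = watch_from(history, 53654)
```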
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:03:20.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 20 01:03:20.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1442'
Jul 20 01:03:23.226: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 01:03:23.226: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jul 20 01:03:23.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1442'
Jul 20 01:03:23.363: INFO: stderr: ""
Jul 20 01:03:23.363: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:03:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1442" for this suite.
Jul 20 01:03:29.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:03:29.466: INFO: namespace kubectl-1442 deletion completed in 6.099624143s

• [SLOW TEST:8.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:03:29.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-9c8c931a-e38e-46b7-babb-c5894a46831a
STEP: Creating a pod to test consume secrets
Jul 20 01:03:29.543: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f" in namespace "projected-5826" to be "success or failure"
Jul 20 01:03:29.546: INFO: Pod "pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568134ms
Jul 20 01:03:31.550: INFO: Pod "pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007603407s
Jul 20 01:03:33.554: INFO: Pod "pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010803749s
STEP: Saw pod success
Jul 20 01:03:33.554: INFO: Pod "pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f" satisfied condition "success or failure"
Jul 20 01:03:33.556: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 01:03:33.584: INFO: Waiting for pod pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f to disappear
Jul 20 01:03:33.588: INFO: Pod pod-projected-secrets-3ffcc439-738b-4ba3-9392-d55738d7970f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:03:33.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5826" for this suite.
Jul 20 01:03:39.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:03:39.688: INFO: namespace projected-5826 deletion completed in 6.095830523s

• [SLOW TEST:10.221 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:03:39.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6687cfac-b3e4-488e-a341-7cc3d2e3037f
STEP: Creating a pod to test consume configMaps
Jul 20 01:03:40.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295" in namespace "configmap-7090" to be "success or failure"
Jul 20 01:03:40.140: INFO: Pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295": Phase="Pending", Reason="", readiness=false. Elapsed: 22.236207ms
Jul 20 01:03:42.144: INFO: Pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026010349s
Jul 20 01:03:44.147: INFO: Pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295": Phase="Running", Reason="", readiness=true. Elapsed: 4.029906126s
Jul 20 01:03:46.152: INFO: Pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034244924s
STEP: Saw pod success
Jul 20 01:03:46.152: INFO: Pod "pod-configmaps-ea094c50-c697-4aea-9896-52715e804295" satisfied condition "success or failure"
Jul 20 01:03:46.155: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ea094c50-c697-4aea-9896-52715e804295 container configmap-volume-test: 
STEP: delete the pod
Jul 20 01:03:46.180: INFO: Waiting for pod pod-configmaps-ea094c50-c697-4aea-9896-52715e804295 to disappear
Jul 20 01:03:46.214: INFO: Pod pod-configmaps-ea094c50-c697-4aea-9896-52715e804295 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:03:46.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7090" for this suite.
Jul 20 01:03:52.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:03:52.345: INFO: namespace configmap-7090 deletion completed in 6.127311439s

• [SLOW TEST:12.657 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
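The test that finishes above mounts one ConfigMap into a pod through more than one volume. As an illustration only (the generated names in the log are replaced by hypothetical ones here), the shape of the objects involved is roughly:

```yaml
# Sketch only: one ConfigMap consumed through two volume mounts in the same
# pod, mirroring the "consumable in multiple volumes" case above.
# All names are illustrative, not the generated names from the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: example-config
  - name: configmap-volume-2
    configMap:
      name: example-config
```

The framework waits for the pod to reach `Succeeded` ("success or failure" in the log) and then checks the container's output against the ConfigMap data.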
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:03:52.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-2bz9
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 01:03:52.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2bz9" in namespace "subpath-8766" to be "success or failure"
Jul 20 01:03:52.517: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Pending", Reason="", readiness=false. Elapsed: 55.893437ms
Jul 20 01:03:54.760: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298955638s
Jul 20 01:03:56.764: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303062963s
Jul 20 01:03:58.769: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 6.307496162s
Jul 20 01:04:00.773: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 8.311721931s
Jul 20 01:04:02.777: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 10.315951546s
Jul 20 01:04:04.781: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 12.320301439s
Jul 20 01:04:06.787: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 14.325545675s
Jul 20 01:04:08.791: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 16.329670705s
Jul 20 01:04:10.795: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 18.333909499s
Jul 20 01:04:12.799: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 20.33800218s
Jul 20 01:04:14.804: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 22.34272544s
Jul 20 01:04:16.833: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Running", Reason="", readiness=true. Elapsed: 24.372081271s
Jul 20 01:04:18.837: INFO: Pod "pod-subpath-test-configmap-2bz9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.376315641s
STEP: Saw pod success
Jul 20 01:04:18.837: INFO: Pod "pod-subpath-test-configmap-2bz9" satisfied condition "success or failure"
Jul 20 01:04:18.841: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-2bz9 container test-container-subpath-configmap-2bz9: 
STEP: delete the pod
Jul 20 01:04:18.860: INFO: Waiting for pod pod-subpath-test-configmap-2bz9 to disappear
Jul 20 01:04:18.864: INFO: Pod pod-subpath-test-configmap-2bz9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2bz9
Jul 20 01:04:18.864: INFO: Deleting pod "pod-subpath-test-configmap-2bz9" in namespace "subpath-8766"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:04:18.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8766" for this suite.
Jul 20 01:04:25.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:04:25.267: INFO: namespace subpath-8766 deletion completed in 6.398621601s

• [SLOW TEST:32.922 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
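The subpath test above keeps its pod in `Running` for roughly 26 seconds while the container repeatedly reads a file mounted via `subPath` from a ConfigMap volume. A minimal sketch of such a pod, with hypothetical names, might look like:

```yaml
# Sketch only: mounting a single ConfigMap key via subPath, as exercised by
# the atomic-writer subpath test above. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/test/sub/file.txt"]
    volumeMounts:
    - name: config
      mountPath: /test/sub/file.txt
      subPath: file.txt        # mounts just this key, not the whole volume
  volumes:
  - name: config
    configMap:
      name: example-config
```

The long `Running` window in the log is the point of the test: the container keeps reading while the kubelet manages the volume, verifying the subpath stays consistent.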
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:04:25.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-16f4e71d-b2ab-42b2-a0d1-49f12a622e51
STEP: Creating a pod to test consume configMaps
Jul 20 01:04:25.428: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457" in namespace "projected-2228" to be "success or failure"
Jul 20 01:04:25.470: INFO: Pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457": Phase="Pending", Reason="", readiness=false. Elapsed: 41.880771ms
Jul 20 01:04:27.474: INFO: Pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046052757s
Jul 20 01:04:29.478: INFO: Pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457": Phase="Running", Reason="", readiness=true. Elapsed: 4.049697218s
Jul 20 01:04:31.482: INFO: Pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053938292s
STEP: Saw pod success
Jul 20 01:04:31.482: INFO: Pod "pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457" satisfied condition "success or failure"
Jul 20 01:04:31.485: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 01:04:31.507: INFO: Waiting for pod pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457 to disappear
Jul 20 01:04:31.528: INFO: Pod pod-projected-configmaps-a321d03b-1558-4040-96e9-19110f96e457 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:04:31.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2228" for this suite.
Jul 20 01:04:37.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:04:37.645: INFO: namespace projected-2228 deletion completed in 6.086826633s

• [SLOW TEST:12.378 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
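The projected-ConfigMap variant above differs from the plain ConfigMap test only in the volume type. A hedged sketch (illustrative names) of a projected volume drawing on a ConfigMap source:

```yaml
# Sketch only: a projected volume with a ConfigMap source; the projected
# variant of the multi-volume consumption test above. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: example-config
```

A `projected` volume can combine ConfigMaps, Secrets, and downward-API items under one mount path, which is why it gets its own conformance coverage.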
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:04:37.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 20 01:04:42.328: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5cc653cd-ae39-45b8-a320-16348038bc88"
Jul 20 01:04:42.328: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5cc653cd-ae39-45b8-a320-16348038bc88" in namespace "pods-385" to be "terminated due to deadline exceeded"
Jul 20 01:04:42.361: INFO: Pod "pod-update-activedeadlineseconds-5cc653cd-ae39-45b8-a320-16348038bc88": Phase="Running", Reason="", readiness=true. Elapsed: 33.104847ms
Jul 20 01:04:44.365: INFO: Pod "pod-update-activedeadlineseconds-5cc653cd-ae39-45b8-a320-16348038bc88": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.037139968s
Jul 20 01:04:44.365: INFO: Pod "pod-update-activedeadlineseconds-5cc653cd-ae39-45b8-a320-16348038bc88" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:04:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-385" for this suite.
Jul 20 01:04:50.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:04:50.490: INFO: namespace pods-385 deletion completed in 6.120436039s

• [SLOW TEST:12.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
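In the `activeDeadlineSeconds` test above, the pod is created without a deadline, then updated with a short one; two seconds later the log shows `Phase="Failed", Reason="DeadlineExceeded"`. The field being exercised sits on the pod spec, roughly (illustrative names; my understanding is that this field may be added or lowered on a running pod but not raised or removed):

```yaml
# Sketch only: activeDeadlineSeconds on a pod spec. The test above patches
# this onto a running pod; once the deadline passes, the kubelet fails the
# pod with reason DeadlineExceeded.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example
spec:
  activeDeadlineSeconds: 5     # seconds of active time before the pod is failed
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```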
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:04:50.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 01:04:50.732: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 20 01:04:50.754: INFO: Number of nodes with available pods: 0
Jul 20 01:04:50.754: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 20 01:04:51.270: INFO: Number of nodes with available pods: 0
Jul 20 01:04:51.270: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:52.275: INFO: Number of nodes with available pods: 0
Jul 20 01:04:52.275: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:53.427: INFO: Number of nodes with available pods: 0
Jul 20 01:04:53.427: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:54.276: INFO: Number of nodes with available pods: 0
Jul 20 01:04:54.276: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:55.275: INFO: Number of nodes with available pods: 1
Jul 20 01:04:55.275: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 20 01:04:55.304: INFO: Number of nodes with available pods: 1
Jul 20 01:04:55.304: INFO: Number of running nodes: 0, number of available pods: 1
Jul 20 01:04:56.308: INFO: Number of nodes with available pods: 0
Jul 20 01:04:56.308: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 20 01:04:56.385: INFO: Number of nodes with available pods: 0
Jul 20 01:04:56.385: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:57.389: INFO: Number of nodes with available pods: 0
Jul 20 01:04:57.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:58.389: INFO: Number of nodes with available pods: 0
Jul 20 01:04:58.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:04:59.388: INFO: Number of nodes with available pods: 0
Jul 20 01:04:59.388: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:00.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:00.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:01.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:01.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:02.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:02.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:03.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:03.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:04.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:04.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:05.427: INFO: Number of nodes with available pods: 0
Jul 20 01:05:05.427: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:06.389: INFO: Number of nodes with available pods: 0
Jul 20 01:05:06.389: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:07.388: INFO: Number of nodes with available pods: 0
Jul 20 01:05:07.388: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:05:08.389: INFO: Number of nodes with available pods: 1
Jul 20 01:05:08.389: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6214, will wait for the garbage collector to delete the pods
Jul 20 01:05:08.452: INFO: Deleting DaemonSet.extensions daemon-set took: 5.519643ms
Jul 20 01:05:08.752: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.248041ms
Jul 20 01:05:15.155: INFO: Number of nodes with available pods: 0
Jul 20 01:05:15.155: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 01:05:15.157: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6214/daemonsets","resourceVersion":"54108"},"items":null}

Jul 20 01:05:15.159: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6214/pods","resourceVersion":"54108"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:05:15.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6214" for this suite.
Jul 20 01:05:21.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:05:21.303: INFO: namespace daemonsets-6214 deletion completed in 6.098522638s

• [SLOW TEST:30.812 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
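The "complex daemon" test above drives scheduling purely by node labels: relabelling a node blue launches the daemon pod, relabelling it green (after the selector is updated) moves it. A sketch of the DaemonSet shape being exercised, with illustrative label keys:

```yaml
# Sketch only: a DaemonSet gated by a node-label selector, as in the
# "run and stop complex daemon" test above. Labels and names are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green           # pods run only on nodes carrying this label
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
```

The repeated "Number of nodes with available pods: 0" lines in the log are the framework polling until the daemon pod on the relabelled node reports available.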
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:05:21.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 20 01:05:21.491: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54145,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 01:05:21.491: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54145,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 20 01:05:31.499: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54165,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 20 01:05:31.499: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54165,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 20 01:05:41.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54187,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 01:05:41.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54187,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 20 01:05:51.516: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54207,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 01:05:51.516: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-a,UID:230e6205-0253-4e1d-ba1a-a0b8bdb89874,ResourceVersion:54207,Generation:0,CreationTimestamp:2020-07-20 01:05:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 20 01:06:01.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-b,UID:89dc4737-7b86-4884-a0d4-1cf864ce625b,ResourceVersion:54227,Generation:0,CreationTimestamp:2020-07-20 01:06:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 01:06:01.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-b,UID:89dc4737-7b86-4884-a0d4-1cf864ce625b,ResourceVersion:54227,Generation:0,CreationTimestamp:2020-07-20 01:06:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 20 01:06:11.531: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-b,UID:89dc4737-7b86-4884-a0d4-1cf864ce625b,ResourceVersion:54248,Generation:0,CreationTimestamp:2020-07-20 01:06:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 01:06:11.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-406,SelfLink:/api/v1/namespaces/watch-406/configmaps/e2e-watch-test-configmap-b,UID:89dc4737-7b86-4884-a0d4-1cf864ce625b,ResourceVersion:54248,Generation:0,CreationTimestamp:2020-07-20 01:06:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:06:21.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-406" for this suite.
Jul 20 01:06:27.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:06:27.620: INFO: namespace watch-406 deletion completed in 6.085019638s

• [SLOW TEST:66.318 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
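The watch test above creates ConfigMaps labelled `watch-this-configmap: multiple-watchers-A` / `-B` and asserts that label-selected watchers see exactly the ADDED, MODIFIED, and DELETED events dumped in the log. The objects being watched look roughly like this (the real ones carry generated metadata):

```yaml
# Sketch only: the kind of labelled ConfigMap the watch test above creates.
# Watchers select on the label, e.g.
#   kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"                # bumped on each modification in the test
```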
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:06:27.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 01:06:27.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a" in namespace "downward-api-1387" to be "success or failure"
Jul 20 01:06:27.695: INFO: Pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.862157ms
Jul 20 01:06:29.699: INFO: Pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007296595s
Jul 20 01:06:31.745: INFO: Pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053911179s
Jul 20 01:06:33.749: INFO: Pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058172079s
STEP: Saw pod success
Jul 20 01:06:33.750: INFO: Pod "downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a" satisfied condition "success or failure"
Jul 20 01:06:33.752: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a container client-container: 
STEP: delete the pod
Jul 20 01:06:33.795: INFO: Waiting for pod downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a to disappear
Jul 20 01:06:33.821: INFO: Pod downwardapi-volume-a282bab1-fb6a-4071-af92-3abbfa9d095a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:06:33.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1387" for this suite.
Jul 20 01:06:39.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:06:39.914: INFO: namespace downward-api-1387 deletion completed in 6.088068604s

• [SLOW TEST:12.293 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
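The "podname only" downward-API test above projects a single pod field into a file. A hedged sketch of the pod shape (illustrative names):

```yaml
# Sketch only: a downward-API volume exposing just the pod name, matching
# the "should provide podname only" test above. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The framework then reads the container's logs and checks they contain the pod's own name.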
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:06:39.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 20 01:06:48.019: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:48.030: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:06:50.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:50.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:06:52.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:52.035: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:06:54.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:54.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:06:56.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:56.035: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:06:58.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:06:58.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:00.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:00.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:02.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:02.035: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:04.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:04.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:06.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:06.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:08.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:08.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:10.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:10.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:12.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:12.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:14.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:14.035: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 20 01:07:16.030: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 20 01:07:16.034: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:07:16.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9749" for this suite.
Jul 20 01:07:38.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:07:38.169: INFO: namespace container-lifecycle-hook-9749 deletion completed in 22.130402215s

• [SLOW TEST:58.255 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
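The repeated "Waiting for pod pod-with-poststart-exec-hook to disappear" / "still exists" lines above come from a fixed-interval poll loop with a deadline (roughly every 2 seconds, as the timestamps show). The framework itself is Go; the following is a minimal Python sketch of that wait-until-deleted pattern, with an invented `wait_for_pod_gone` helper and a fake poller standing in for the API call:

```python
import time

def wait_for_pod_gone(pod_exists, timeout_s=120, interval_s=2.0):
    """Poll pod_exists() every interval_s seconds until it returns False,
    mirroring the 'Waiting for pod ... to disappear' loop in the log.
    Raises TimeoutError if the pod is still present at the deadline."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not pod_exists():
            return  # pod no longer exists
        time.sleep(interval_s)
    raise TimeoutError("pod still exists after %ss" % timeout_s)

# Usage with a fake poller: the "pod" disappears on the third check.
calls = {"n": 0}
def fake_exists():
    calls["n"] += 1
    return calls["n"] < 3

wait_for_pod_gone(fake_exists, timeout_s=5, interval_s=0.01)
```

The real framework additionally distinguishes NotFound errors from transient API failures; this sketch only captures the poll-with-deadline shape.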
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:07:38.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jul 20 01:07:38.251: INFO: Waiting up to 5m0s for pod "var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27" in namespace "var-expansion-9585" to be "success or failure"
Jul 20 01:07:38.258: INFO: Pod "var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27": Phase="Pending", Reason="", readiness=false. Elapsed: 7.371985ms
Jul 20 01:07:40.263: INFO: Pod "var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011889s
Jul 20 01:07:42.297: INFO: Pod "var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046427916s
STEP: Saw pod success
Jul 20 01:07:42.297: INFO: Pod "var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27" satisfied condition "success or failure"
Jul 20 01:07:42.300: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27 container dapi-container: 
STEP: delete the pod
Jul 20 01:07:42.316: INFO: Waiting for pod var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27 to disappear
Jul 20 01:07:42.318: INFO: Pod var-expansion-936fb58e-1c4a-4f02-a42a-e39b2e542e27 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:07:42.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9585" for this suite.
Jul 20 01:07:48.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:07:48.457: INFO: namespace var-expansion-9585 deletion completed in 6.136699624s

• [SLOW TEST:10.287 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
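The Variable Expansion test above exercises Kubernetes' substitution of `$(VAR)` references in a container's `command`/`args` from the container's environment. A small illustrative Python sketch of that substitution rule (not the real kubelet expansion code, which also handles `$$` escaping): known variables are substituted, unresolvable references are left verbatim.

```python
import re

def expand_command(args, env):
    """Expand $(VAR) references in container command args, the behavior
    the var-expansion test verifies. Unknown $(VAR) references stay as
    written rather than expanding to an empty string."""
    def repl(m):
        name = m.group(1)
        return env.get(name, m.group(0))  # leave unknown $(VAR) as-is
    pattern = r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)"
    return [re.sub(pattern, repl, a) for a in args]

env = {"POD_NAME": "var-expansion-test"}
print(expand_command(["echo", "$(POD_NAME)", "$(MISSING)"], env))
# → ['echo', 'var-expansion-test', '$(MISSING)']
```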
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:07:48.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 20 01:07:49.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2885'
Jul 20 01:07:49.231: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 01:07:49.231: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jul 20 01:07:49.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2885'
Jul 20 01:07:49.434: INFO: stderr: ""
Jul 20 01:07:49.434: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:07:49.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2885" for this suite.
Jul 20 01:08:11.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:08:11.586: INFO: namespace kubectl-2885 deletion completed in 22.141363015s

• [SLOW TEST:23.129 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
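As the stderr line above notes, `kubectl run --generator=job/v1` was already deprecated when this suite ran (and has since been removed). The deprecated invocation produced roughly the following Job object; the sketch below shows it as a plain Python dict, with field values taken from the log where available and otherwise illustrative:

```python
# Approximately the Job that `kubectl run e2e-test-nginx-job
# --restart=OnFailure --generator=job/v1 --image=...` created.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "e2e-test-nginx-job"},
    "spec": {
        "template": {
            "spec": {
                # Jobs only accept OnFailure or Never here; --restart=OnFailure
                # is what selected the Job generator in old kubectl.
                "restartPolicy": "OnFailure",
                "containers": [{
                    "name": "e2e-test-nginx-job",
                    "image": "docker.io/library/nginx:1.14-alpine",
                }],
            }
        }
    },
}
print(job["spec"]["template"]["spec"]["restartPolicy"])
```

The modern equivalent is `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`.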
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:08:11.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3836633b-63fa-422f-a280-943451ad49cc
STEP: Creating a pod to test consume configMaps
Jul 20 01:08:12.958: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68" in namespace "projected-4663" to be "success or failure"
Jul 20 01:08:13.015: INFO: Pod "pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68": Phase="Pending", Reason="", readiness=false. Elapsed: 56.362762ms
Jul 20 01:08:15.019: INFO: Pod "pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060669549s
Jul 20 01:08:17.040: INFO: Pod "pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081394824s
STEP: Saw pod success
Jul 20 01:08:17.040: INFO: Pod "pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68" satisfied condition "success or failure"
Jul 20 01:08:17.042: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 01:08:17.064: INFO: Waiting for pod pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68 to disappear
Jul 20 01:08:17.068: INFO: Pod pod-projected-configmaps-9a4c022b-0d88-45b5-931c-e3f1c9cede68 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:08:17.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4663" for this suite.
Jul 20 01:08:23.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:08:23.160: INFO: namespace projected-4663 deletion completed in 6.088576395s

• [SLOW TEST:11.574 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:08:23.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul 20 01:08:23.238: INFO: Waiting up to 5m0s for pod "downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14" in namespace "downward-api-7788" to be "success or failure"
Jul 20 01:08:23.253: INFO: Pod "downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 15.161042ms
Jul 20 01:08:25.257: INFO: Pod "downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019048837s
Jul 20 01:08:27.261: INFO: Pod "downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022457898s
STEP: Saw pod success
Jul 20 01:08:27.261: INFO: Pod "downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14" satisfied condition "success or failure"
Jul 20 01:08:27.263: INFO: Trying to get logs from node iruya-worker pod downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14 container dapi-container: 
STEP: delete the pod
Jul 20 01:08:27.389: INFO: Waiting for pod downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14 to disappear
Jul 20 01:08:27.409: INFO: Pod downward-api-e3832a4b-b7f1-4cc5-b299-f8fe70ad0e14 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:08:27.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7788" for this suite.
Jul 20 01:08:33.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:08:33.505: INFO: namespace downward-api-7788 deletion completed in 6.092403379s

• [SLOW TEST:10.345 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:08:33.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 20 01:08:33.548: INFO: Waiting up to 5m0s for pod "pod-1d44331d-abb5-477f-b089-068e11b5b88f" in namespace "emptydir-223" to be "success or failure"
Jul 20 01:08:33.561: INFO: Pod "pod-1d44331d-abb5-477f-b089-068e11b5b88f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.997547ms
Jul 20 01:08:35.566: INFO: Pod "pod-1d44331d-abb5-477f-b089-068e11b5b88f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017289142s
Jul 20 01:08:37.570: INFO: Pod "pod-1d44331d-abb5-477f-b089-068e11b5b88f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021593982s
STEP: Saw pod success
Jul 20 01:08:37.570: INFO: Pod "pod-1d44331d-abb5-477f-b089-068e11b5b88f" satisfied condition "success or failure"
Jul 20 01:08:37.573: INFO: Trying to get logs from node iruya-worker2 pod pod-1d44331d-abb5-477f-b089-068e11b5b88f container test-container: 
STEP: delete the pod
Jul 20 01:08:37.602: INFO: Waiting for pod pod-1d44331d-abb5-477f-b089-068e11b5b88f to disappear
Jul 20 01:08:37.742: INFO: Pod pod-1d44331d-abb5-477f-b089-068e11b5b88f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:08:37.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-223" for this suite.
Jul 20 01:08:43.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:08:43.908: INFO: namespace emptydir-223 deletion completed in 6.162879571s

• [SLOW TEST:10.403 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
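The EmptyDir test above creates a pod with an `emptyDir: {}` volume on the default medium (node disk, as opposed to `medium: Memory`) and checks the mount's file mode from inside a test container. A sketch of that setup as a plain dict, with the pod/container names invented for illustration; the "correct mode" the test expects is the world-writable directory mode, rendered here the way `ls -l` would show it:

```python
import stat

# Illustrative pod: an emptyDir volume on the default medium, mounted
# into a container that inspects /test-volume's permissions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-sketch"},  # name is illustrative
    "spec": {
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
        "containers": [{
            "name": "test-container",
            "image": "docker.io/library/nginx:1.14-alpine",
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}

mode = 0o777  # expected mode of the emptyDir mount in this test
print(stat.filemode(stat.S_IFDIR | mode))  # → drwxrwxrwx
```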
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:08:43.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:08:50.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9566" for this suite.
Jul 20 01:08:56.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:08:56.549: INFO: namespace namespaces-9566 deletion completed in 6.170648423s
STEP: Destroying namespace "nsdeletetest-8386" for this suite.
Jul 20 01:08:56.551: INFO: Namespace nsdeletetest-8386 was already deleted
STEP: Destroying namespace "nsdeletetest-833" for this suite.
Jul 20 01:09:02.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:09:02.695: INFO: namespace nsdeletetest-833 deletion completed in 6.144251176s

• [SLOW TEST:18.787 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:09:02.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 20 01:09:02.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6" in namespace "downward-api-1211" to be "success or failure"
Jul 20 01:09:02.788: INFO: Pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.239019ms
Jul 20 01:09:04.927: INFO: Pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14275132s
Jul 20 01:09:07.017: INFO: Pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23253663s
Jul 20 01:09:09.022: INFO: Pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237155767s
STEP: Saw pod success
Jul 20 01:09:09.022: INFO: Pod "downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6" satisfied condition "success or failure"
Jul 20 01:09:09.025: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6 container client-container: 
STEP: delete the pod
Jul 20 01:09:09.066: INFO: Waiting for pod downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6 to disappear
Jul 20 01:09:09.069: INFO: Pod downwardapi-volume-3bde0f29-fd53-4fd5-be00-d3cf418959f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:09:09.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1211" for this suite.
Jul 20 01:09:15.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:09:15.170: INFO: namespace downward-api-1211 deletion completed in 6.099238372s

• [SLOW TEST:12.474 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:09:15.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e8251592-224d-497c-8027-c6b0196926fa
STEP: Creating a pod to test consume configMaps
Jul 20 01:09:15.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9" in namespace "projected-1657" to be "success or failure"
Jul 20 01:09:15.255: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839707ms
Jul 20 01:09:17.538: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287136886s
Jul 20 01:09:19.543: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291629514s
Jul 20 01:09:21.640: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389284052s
Jul 20 01:09:23.644: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.392674545s
STEP: Saw pod success
Jul 20 01:09:23.644: INFO: Pod "pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9" satisfied condition "success or failure"
Jul 20 01:09:23.646: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 01:09:23.899: INFO: Waiting for pod pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9 to disappear
Jul 20 01:09:23.944: INFO: Pod pod-projected-configmaps-f54283d1-6720-40bb-82eb-6a5015ba24a9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:09:23.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1657" for this suite.
Jul 20 01:09:30.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:09:30.251: INFO: namespace projected-1657 deletion completed in 6.303633535s

• [SLOW TEST:15.081 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:09:30.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 20 01:09:30.823: INFO: Create a RollingUpdate DaemonSet
Jul 20 01:09:30.826: INFO: Check that daemon pods launch on every node of the cluster
Jul 20 01:09:30.964: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:30.967: INFO: Number of nodes with available pods: 0
Jul 20 01:09:30.967: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:09:31.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:31.975: INFO: Number of nodes with available pods: 0
Jul 20 01:09:31.975: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:09:32.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:32.977: INFO: Number of nodes with available pods: 0
Jul 20 01:09:32.977: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:09:33.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:33.975: INFO: Number of nodes with available pods: 0
Jul 20 01:09:33.975: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:09:34.976: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:34.979: INFO: Number of nodes with available pods: 1
Jul 20 01:09:34.979: INFO: Node iruya-worker is running more than one daemon pod
Jul 20 01:09:35.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:35.976: INFO: Number of nodes with available pods: 2
Jul 20 01:09:35.976: INFO: Number of running nodes: 2, number of available pods: 2
Jul 20 01:09:35.976: INFO: Update the DaemonSet to trigger a rollout
Jul 20 01:09:35.983: INFO: Updating DaemonSet daemon-set
Jul 20 01:09:46.004: INFO: Roll back the DaemonSet before rollout is complete
Jul 20 01:09:46.011: INFO: Updating DaemonSet daemon-set
Jul 20 01:09:46.011: INFO: Make sure DaemonSet rollback is complete
Jul 20 01:09:46.288: INFO: Wrong image for pod: daemon-set-676hf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 20 01:09:46.288: INFO: Pod daemon-set-676hf is not available
Jul 20 01:09:46.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:47.335: INFO: Wrong image for pod: daemon-set-676hf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 20 01:09:47.335: INFO: Pod daemon-set-676hf is not available
Jul 20 01:09:47.338: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:48.413: INFO: Wrong image for pod: daemon-set-676hf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 20 01:09:48.413: INFO: Pod daemon-set-676hf is not available
Jul 20 01:09:48.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:49.335: INFO: Wrong image for pod: daemon-set-676hf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 20 01:09:49.335: INFO: Pod daemon-set-676hf is not available
Jul 20 01:09:49.338: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:50.335: INFO: Pod daemon-set-mbmjs is not available
Jul 20 01:09:50.522: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 01:09:51.335: INFO: Pod daemon-set-mbmjs is not available
Jul 20 01:09:51.340: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8088, will wait for the garbage collector to delete the pods
Jul 20 01:09:51.404: INFO: Deleting DaemonSet.extensions daemon-set took: 5.920608ms
Jul 20 01:09:51.704: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241359ms
Jul 20 01:09:54.507: INFO: Number of nodes with available pods: 0
Jul 20 01:09:54.507: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 01:09:54.509: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8088/daemonsets","resourceVersion":"55021"},"items":null}

Jul 20 01:09:54.512: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8088/pods","resourceVersion":"55021"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:09:54.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8088" for this suite.
Jul 20 01:10:02.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:10:02.616: INFO: namespace daemonsets-8088 deletion completed in 8.091319207s

• [SLOW TEST:32.364 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
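The rollback sequence the spec above exercises (start a rollout with an image that can never pull, then undo it before it completes) can be reproduced by hand. A minimal sketch, assuming a DaemonSet named `daemon-set` whose pod template has a container named `app`; the container name and label selector are illustrative, while the images come from the log:

```shell
# Trigger a rollout by pointing the DaemonSet at a non-existent image;
# the replacement pods will never become available.
kubectl set image daemonset/daemon-set app=foo:non-existent

# Roll back to the previous (working) template before the rollout finishes.
kubectl rollout undo daemonset/daemon-set

# Wait until every node runs the rolled-back pods again.
kubectl rollout status daemonset/daemon-set --timeout=2m

# Verify no pod was left on the broken image (the label is an assumption).
kubectl get pods -l name=daemon-set \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}'
```

As in the log, the rollback is expected to replace only the pods that picked up the bad image, leaving the untouched pods in place without unnecessary restarts.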
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:10:02.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 01:10:10.801: INFO: DNS probes using dns-test-ce13e913-1ad2-4658-8b9e-c59e65caf81f succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 01:10:18.943: INFO: File wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:18.946: INFO: File jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:18.946: INFO: Lookups using dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 failed for: [wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local]

Jul 20 01:10:23.952: INFO: File wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:23.956: INFO: File jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:23.956: INFO: Lookups using dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 failed for: [wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local]

Jul 20 01:10:28.954: INFO: File wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:28.957: INFO: File jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:28.957: INFO: Lookups using dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 failed for: [wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local]

Jul 20 01:10:33.951: INFO: File wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:33.955: INFO: File jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:33.955: INFO: Lookups using dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 failed for: [wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local]

Jul 20 01:10:38.955: INFO: File jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local from pod  dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 20 01:10:38.955: INFO: Lookups using dns-7071/dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 failed for: [jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local]

Jul 20 01:10:43.954: INFO: DNS probes using dns-test-fe8fb69a-a488-4ee9-9db9-514e687a8e97 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7071.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7071.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 01:10:50.611: INFO: DNS probes using dns-test-b892baed-1567-4e8a-a5ac-71d44f421ece succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:10:50.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7071" for this suite.
Jul 20 01:10:56.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:10:56.821: INFO: namespace dns-7071 deletion completed in 6.102386901s

• [SLOW TEST:54.204 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
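The DNS spec above drives a `dig` loop inside probe pods; the same check can be sketched outside the test framework. Assuming a namespace `my-ns` and the default `cluster.local` DNS suffix (the service and external names are taken from the log):

```shell
# Create an ExternalName service aliasing foo.example.com.
kubectl create service externalname dns-test-service-3 \
  --external-name foo.example.com -n my-ns

# From any in-cluster pod that has dig, the service name should
# resolve to a CNAME for the external name:
for i in $(seq 1 30); do
  dig +short dns-test-service-3.my-ns.svc.cluster.local CNAME
  sleep 1
done

# Repoint the service; the loop's answers should flip to
# bar.example.com. once cached records expire (the log shows this
# taking several retry rounds before the probes succeed).
kubectl patch service dns-test-service-3 -n my-ns \
  -p '{"spec":{"externalName":"bar.example.com"}}'
```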
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:10:56.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jul 20 01:10:56.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 20 01:11:00.723: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0720 01:11:00.651377    3074 log.go:172] (0xc000972420) (0xc0005286e0) Create stream\nI0720 01:11:00.651447    3074 log.go:172] (0xc000972420) (0xc0005286e0) Stream added, broadcasting: 1\nI0720 01:11:00.655796    3074 log.go:172] (0xc000972420) Reply frame received for 1\nI0720 01:11:00.655839    3074 log.go:172] (0xc000972420) (0xc0005280a0) Create stream\nI0720 01:11:00.655848    3074 log.go:172] (0xc000972420) (0xc0005280a0) Stream added, broadcasting: 3\nI0720 01:11:00.657035    3074 log.go:172] (0xc000972420) Reply frame received for 3\nI0720 01:11:00.657082    3074 log.go:172] (0xc000972420) (0xc000184000) Create stream\nI0720 01:11:00.657098    3074 log.go:172] (0xc000972420) (0xc000184000) Stream added, broadcasting: 5\nI0720 01:11:00.657999    3074 log.go:172] (0xc000972420) Reply frame received for 5\nI0720 01:11:00.658035    3074 log.go:172] (0xc000972420) (0xc000220000) Create stream\nI0720 01:11:00.658046    3074 log.go:172] (0xc000972420) (0xc000220000) Stream added, broadcasting: 7\nI0720 01:11:00.658833    3074 log.go:172] (0xc000972420) Reply frame received for 7\nI0720 01:11:00.658973    3074 log.go:172] (0xc0005280a0) (3) Writing data frame\nI0720 01:11:00.659096    3074 log.go:172] (0xc0005280a0) (3) Writing data frame\nI0720 01:11:00.659827    3074 log.go:172] (0xc000972420) Data frame received for 5\nI0720 01:11:00.659846    3074 log.go:172] (0xc000184000) (5) Data frame handling\nI0720 01:11:00.659859    3074 log.go:172] (0xc000184000) (5) Data frame sent\nI0720 01:11:00.660519    3074 log.go:172] (0xc000972420) Data frame received for 5\nI0720 01:11:00.660535    3074 log.go:172] (0xc000184000) (5) Data frame handling\nI0720 01:11:00.660550    3074 log.go:172] (0xc000184000) (5) Data frame 
sent\nI0720 01:11:00.699868    3074 log.go:172] (0xc000972420) Data frame received for 5\nI0720 01:11:00.699893    3074 log.go:172] (0xc000184000) (5) Data frame handling\nI0720 01:11:00.700278    3074 log.go:172] (0xc000972420) Data frame received for 7\nI0720 01:11:00.700298    3074 log.go:172] (0xc000220000) (7) Data frame handling\nI0720 01:11:00.700562    3074 log.go:172] (0xc000972420) Data frame received for 1\nI0720 01:11:00.700573    3074 log.go:172] (0xc0005286e0) (1) Data frame handling\nI0720 01:11:00.700592    3074 log.go:172] (0xc0005286e0) (1) Data frame sent\nI0720 01:11:00.700605    3074 log.go:172] (0xc000972420) (0xc0005286e0) Stream removed, broadcasting: 1\nI0720 01:11:00.700679    3074 log.go:172] (0xc000972420) (0xc0005286e0) Stream removed, broadcasting: 1\nI0720 01:11:00.700689    3074 log.go:172] (0xc000972420) (0xc0005280a0) Stream removed, broadcasting: 3\nI0720 01:11:00.700709    3074 log.go:172] (0xc000972420) (0xc000184000) Stream removed, broadcasting: 5\nI0720 01:11:00.700860    3074 log.go:172] (0xc000972420) (0xc000220000) Stream removed, broadcasting: 7\nI0720 01:11:00.700921    3074 log.go:172] (0xc000972420) Go away received\n"
Jul 20 01:11:00.724: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:11:02.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1453" for this suite.
Jul 20 01:11:08.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:11:08.893: INFO: namespace kubectl-1453 deletion completed in 6.160368997s

• [SLOW TEST:12.072 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
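The spec above boils down to a single `kubectl run --rm` invocation; a sketch using the exact flags from the log (note the stderr warning above that `--generator=job/v1` is deprecated in this kubectl version):

```shell
# Run a one-shot Job, attach stdin to it, and delete it on exit.
# --rm removes the job after the attached session ends;
# --attach --stdin wires the local terminal into the container.
echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin \
  -- sh -c "cat && echo 'stdin closed'"

# The job object should be gone afterwards:
kubectl get job e2e-test-rm-busybox-job   # expect a NotFound error
```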
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:11:08.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dc531023-fe0a-43f1-b69c-c0f208312f95
STEP: Creating a pod to test consume secrets
Jul 20 01:11:08.976: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04" in namespace "projected-9766" to be "success or failure"
Jul 20 01:11:08.982: INFO: Pod "pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04": Phase="Pending", Reason="", readiness=false. Elapsed: 5.332721ms
Jul 20 01:11:10.986: INFO: Pod "pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00974543s
Jul 20 01:11:12.990: INFO: Pod "pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013807756s
STEP: Saw pod success
Jul 20 01:11:12.990: INFO: Pod "pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04" satisfied condition "success or failure"
Jul 20 01:11:12.993: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04 container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 01:11:13.019: INFO: Waiting for pod pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04 to disappear
Jul 20 01:11:13.024: INFO: Pod pod-projected-secrets-a50885da-559b-4ca4-b90d-a73af450ca04 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:11:13.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9766" for this suite.
Jul 20 01:11:19.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:11:19.114: INFO: namespace projected-9766 deletion completed in 6.086552523s

• [SLOW TEST:10.219 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
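The projected-secret spec mounts a secret through a `projected` volume with a key-to-path mapping. A minimal sketch of an equivalent pod; the secret name, key, and mapped path here are illustrative, since the log only shows generated names:

```shell
# A secret to project (name and key are assumptions).
kubectl create secret generic my-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # Read the secret key back through its mapped path.
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: my-secret
          items:
          - key: data-1          # secret key ...
            path: new-path-data-1  # ... exposed under this file name
EOF

# Mirroring the "success or failure" condition in the log, the pod
# should reach phase Succeeded after printing the secret value:
kubectl get pod pod-projected-secret -o jsonpath='{.status.phase}'
```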
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 20 01:11:19.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-68b285c8-a283-4bc7-9c6b-305280041cf6
STEP: Creating a pod to test consume secrets
Jul 20 01:11:19.206: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed" in namespace "projected-9009" to be "success or failure"
Jul 20 01:11:19.209: INFO: Pod "pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447547ms
Jul 20 01:11:21.213: INFO: Pod "pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006484463s
Jul 20 01:11:23.217: INFO: Pod "pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010381134s
STEP: Saw pod success
Jul 20 01:11:23.217: INFO: Pod "pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed" satisfied condition "success or failure"
Jul 20 01:11:23.220: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 01:11:23.289: INFO: Waiting for pod pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed to disappear
Jul 20 01:11:23.311: INFO: Pod pod-projected-secrets-92822ad4-6496-44ad-af59-edc33fe0a7ed no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 20 01:11:23.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9009" for this suite.
Jul 20 01:11:29.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 20 01:11:29.406: INFO: namespace projected-9009 deletion completed in 6.091378412s

• [SLOW TEST:10.292 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
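The non-root variant above additionally sets `defaultMode` on the projected volume and `runAsUser`/`fsGroup` in the pod's security context. A sketch with illustrative values, since the log does not show the exact mode or IDs the test uses (it also assumes a secret named `my-secret` already exists):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # run as non-root, as in the [LinuxOnly] spec
    fsGroup: 2000        # group ownership applied to volume contents
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    # List the mount to inspect the resulting modes and ownership.
    command: ["sh", "-c", "ls -ln /etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440  # file mode for the projected keys
      sources:
      - secret:
          name: my-secret
EOF
```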
SSS
Jul 20 01:11:29.406: INFO: Running AfterSuite actions on all nodes
Jul 20 01:11:29.406: INFO: Running AfterSuite actions on node 1
Jul 20 01:11:29.406: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6189.592 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS