I0511 17:19:02.024006 7 e2e.go:243] Starting e2e run "989177ec-acd0-4485-b124-7c64419c8a75" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589217541 - Will randomize all specs
Will run 215 of 4412 specs

May 11 17:19:02.215: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:19:02.219: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 17:19:02.236: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 17:19:02.270: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 17:19:02.270: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 17:19:02.270: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 17:19:02.277: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 17:19:02.277: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 17:19:02.277: INFO: e2e test version: v1.15.11
May 11 17:19:02.278: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:19:02.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
May 11 17:19:02.579: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 11 17:19:02.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:19:10.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7470" for this suite.
May 11 17:20:05.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:20:05.265: INFO: namespace pods-7470 deletion completed in 54.575946175s

• [SLOW TEST:62.986 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
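The spec above exercises the kubelet's log endpoint over a websocket. As a rough illustration of hitting the same logs endpoint from outside the e2e framework, here is a minimal client-go sketch (assuming a recent client-go release; the namespace, pod, and container names are hypothetical, and GetLogs streams over plain HTTP rather than a websocket):

```go
// Sketch: stream a container's logs with client-go. Assumes a recent
// client-go; pod/container names below are illustrative, not the suite's.
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Open a log stream for a hypothetical pod/container pair.
	req := clientset.CoreV1().Pods("default").GetLogs("pod-logs-demo",
		&corev1.PodLogOptions{Container: "main"})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Copy log bytes to stdout as they arrive.
	io.Copy(os.Stdout, stream)
}
```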
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:20:05.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 in namespace container-probe-4050
May 11 17:20:16.414: INFO: Started pod liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 in namespace container-probe-4050
STEP: checking the pod's current state and verifying that restartCount is present
May 11 17:20:16.417: INFO: Initial restart count of pod liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is 0
May 11 17:20:34.556: INFO: Restart count of pod container-probe-4050/liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is now 1 (18.138616365s elapsed)
May 11 17:20:54.951: INFO: Restart count of pod container-probe-4050/liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is now 2 (38.53377779s elapsed)
May 11 17:21:12.205: INFO: Restart count of pod container-probe-4050/liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is now 3 (55.788421768s elapsed)
May 11 17:21:35.347: INFO: Restart count of pod container-probe-4050/liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is now 4 (1m18.929656147s elapsed)
May 11 17:22:33.828: INFO: Restart count of pod container-probe-4050/liveness-45711cfa-16b3-4d55-b5f4-edfa8c5e7ac1 is now 5 (2m17.410854551s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:22:33.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4050" for this suite.
May 11 17:22:40.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:22:40.503: INFO: namespace container-probe-4050 deletion completed in 6.500822773s

• [SLOW TEST:155.237 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
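The restart counts above climb because the container keeps failing its liveness probe, so the kubelet restarts it and bumps restartCount each time. A minimal sketch of a pod built the same way (names, image, and timings are illustrative, not the suite's exact spec; on client-go releases older than v0.22 the ProbeHandler field is spelled Handler):

```go
// Sketch: a pod whose exec liveness probe starts failing after the health
// file is removed, driving a monotonically increasing restartCount.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Create the health file, then remove it so the probe fails.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```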
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:22:40.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:22:40.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-881" for this suite.
May 11 17:22:46.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:22:47.155: INFO: namespace services-881 deletion completed in 6.437966741s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.652 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
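This spec checks the built-in "kubernetes" Service in the default namespace, which fronts the API server on a secure port. A hedged client-go sketch of the same lookup (assuming a recent client-go; the port-443 expectation is what the spec name implies, not something this sketch enforces):

```go
// Sketch: fetch the built-in "kubernetes" master service and print its
// ports; the secure-master-service spec expects an HTTPS port (443) here.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	svc, err := clientset.CoreV1().Services("default").Get(
		context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %q: %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}
```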
"pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.643589ms May 11 17:22:49.450: INFO: Pod "pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052947724s May 11 17:22:51.455: INFO: Pod "pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057791694s May 11 17:22:53.459: INFO: Pod "pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061474942s STEP: Saw pod success May 11 17:22:53.459: INFO: Pod "pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3" satisfied condition "success or failure" May 11 17:22:53.462: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3 container projected-secret-volume-test: STEP: delete the pod May 11 17:22:53.930: INFO: Waiting for pod pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3 to disappear May 11 17:22:53.964: INFO: Pod pod-projected-secrets-0f44c21a-b04a-43cd-bc9a-c2c7994a5cf3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:22:53.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1124" for this suite. May 11 17:23:00.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:23:00.239: INFO: namespace projected-1124 deletion completed in 6.272233517s • [SLOW TEST:13.084 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:23:00.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 17:23:00.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8659' May 11 17:23:08.820: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:23:00.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 11 17:23:00.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8659'
May 11 17:23:08.820: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 11 17:23:08.820: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 11 17:23:11.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8659'
May 11 17:23:11.417: INFO: stderr: ""
May 11 17:23:11.417: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:23:11.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8659" for this suite.
May 11 17:23:33.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:23:33.629: INFO: namespace kubectl-8659 deletion completed in 22.209943251s

• [SLOW TEST:33.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
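The stderr warning above is 1.15-era kubectl flagging that generator-based `kubectl run` was being phased out; the suggested replacements were `kubectl run --generator=run-pod/v1` or `kubectl create`. A client-go sketch that creates an equivalent nginx Deployment directly (the name and image are taken from the log; the label pair and namespace are illustrative; assumes a recent client-go):

```go
// Sketch: create the same nginx Deployment via the API instead of the
// deprecated kubectl run generator.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	if _, err := clientset.AppsV1().Deployments("default").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```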
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:23:33.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6372/secret-test-705374be-d4ce-4290-a99f-a94267e74f8e
STEP: Creating a pod to test consume secrets
May 11 17:23:33.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c" in namespace "secrets-6372" to be "success or failure"
May 11 17:23:33.726: INFO: Pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599811ms
May 11 17:23:35.827: INFO: Pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10452744s
May 11 17:23:37.887: INFO: Pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16447344s
May 11 17:23:39.891: INFO: Pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167902435s
STEP: Saw pod success
May 11 17:23:39.891: INFO: Pod "pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c" satisfied condition "success or failure"
May 11 17:23:39.893: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c container env-test:
STEP: delete the pod
May 11 17:23:39.960: INFO: Waiting for pod pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c to disappear
May 11 17:23:39.970: INFO: Pod pod-configmaps-69e1136f-c7de-45c5-83c1-91a4220bb61c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:23:39.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6372" for this suite.
May 11 17:23:47.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:23:48.063: INFO: namespace secrets-6372 deletion completed in 8.090178668s

• [SLOW TEST:14.434 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
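Here the env-test container consumes the secret through environment variables rather than a volume. One common wiring for that pattern is envFrom, sketched below (the secret name, image, and command are illustrative; the suite's actual pod may use per-key env entries instead):

```go
// Sketch: expose every key of a secret to a container as environment
// variables, verified here by simply running `env`.
package sketch

import corev1 "k8s.io/api/core/v1"

func envTestContainer() corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		EnvFrom: []corev1.EnvFromSource{{
			SecretRef: &corev1.SecretEnvSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "secret-test",
				},
			},
		}},
	}
}
```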
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:23:48.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-959
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 17:23:48.133: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 11 17:24:15.545: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.63:8080/dial?request=hostName&protocol=udp&host=10.244.1.127&port=8081&tries=1'] Namespace:pod-network-test-959 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:24:15.546: INFO: >>> kubeConfig: /root/.kube/config
I0511 17:24:15.578713 7 log.go:172] (0xc001d2e6e0) (0xc0017da000) Create stream
I0511 17:24:15.578739 7 log.go:172] (0xc001d2e6e0) (0xc0017da000) Stream added, broadcasting: 1
I0511 17:24:15.580218 7 log.go:172] (0xc001d2e6e0) Reply frame received for 1
I0511 17:24:15.580254 7 log.go:172] (0xc001d2e6e0) (0xc0005ca8c0) Create stream
I0511 17:24:15.580266 7 log.go:172] (0xc001d2e6e0) (0xc0005ca8c0) Stream added, broadcasting: 3
I0511 17:24:15.581340 7 log.go:172] (0xc001d2e6e0) Reply frame received for 3
I0511 17:24:15.581383 7 log.go:172] (0xc001d2e6e0) (0xc0017da0a0) Create stream
I0511 17:24:15.581407 7 log.go:172] (0xc001d2e6e0) (0xc0017da0a0) Stream added, broadcasting: 5
I0511 17:24:15.582276 7 log.go:172] (0xc001d2e6e0) Reply frame received for 5
I0511 17:24:15.644317 7 log.go:172] (0xc001d2e6e0) Data frame received for 3
I0511 17:24:15.644354 7 log.go:172] (0xc0005ca8c0) (3) Data frame handling
I0511 17:24:15.644382 7 log.go:172] (0xc0005ca8c0) (3) Data frame sent
I0511 17:24:15.645040 7 log.go:172] (0xc001d2e6e0) Data frame received for 3
I0511 17:24:15.645050 7 log.go:172] (0xc0005ca8c0) (3) Data frame handling
I0511 17:24:15.645709 7 log.go:172] (0xc001d2e6e0) Data frame received for 5
I0511 17:24:15.645732 7 log.go:172] (0xc0017da0a0) (5) Data frame handling
I0511 17:24:15.646994 7 log.go:172] (0xc001d2e6e0) Data frame received for 1
I0511 17:24:15.647003 7 log.go:172] (0xc0017da000) (1) Data frame handling
I0511 17:24:15.647009 7 log.go:172] (0xc0017da000) (1) Data frame sent
I0511 17:24:15.647098 7 log.go:172] (0xc001d2e6e0) (0xc0017da000) Stream removed, broadcasting: 1
I0511 17:24:15.647424 7 log.go:172] (0xc001d2e6e0) (0xc0017da000) Stream removed, broadcasting: 1
I0511 17:24:15.647448 7 log.go:172] (0xc001d2e6e0) (0xc0005ca8c0) Stream removed, broadcasting: 3
I0511 17:24:15.647458 7 log.go:172] (0xc001d2e6e0) (0xc0017da0a0) Stream removed, broadcasting: 5
May 11 17:24:15.647: INFO: Waiting for endpoints: map[]
I0511 17:24:15.647567 7 log.go:172] (0xc001d2e6e0) Go away received
May 11 17:24:15.650: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.63:8080/dial?request=hostName&protocol=udp&host=10.244.2.62&port=8081&tries=1'] Namespace:pod-network-test-959 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:24:15.650: INFO: >>> kubeConfig: /root/.kube/config
I0511 17:24:15.699533 7 log.go:172] (0xc001d2f080) (0xc0017da320) Create stream
I0511 17:24:15.699563 7 log.go:172] (0xc001d2f080) (0xc0017da320) Stream added, broadcasting: 1
I0511 17:24:15.700986 7 log.go:172] (0xc001d2f080) Reply frame received for 1
I0511 17:24:15.701018 7 log.go:172] (0xc001d2f080) (0xc0005cadc0) Create stream
I0511 17:24:15.701031 7 log.go:172] (0xc001d2f080) (0xc0005cadc0) Stream added, broadcasting: 3
I0511 17:24:15.701820 7 log.go:172] (0xc001d2f080) Reply frame received for 3
I0511 17:24:15.701851 7 log.go:172] (0xc001d2f080) (0xc0019fc500) Create stream
I0511 17:24:15.701859 7 log.go:172] (0xc001d2f080) (0xc0019fc500) Stream added, broadcasting: 5
I0511 17:24:15.702507 7 log.go:172] (0xc001d2f080) Reply frame received for 5
I0511 17:24:15.768070 7 log.go:172] (0xc001d2f080) Data frame received for 3
I0511 17:24:15.768085 7 log.go:172] (0xc0005cadc0) (3) Data frame handling
I0511 17:24:15.768094 7 log.go:172] (0xc0005cadc0) (3) Data frame sent
I0511 17:24:15.768790 7 log.go:172] (0xc001d2f080) Data frame received for 5
I0511 17:24:15.768822 7 log.go:172] (0xc0019fc500) (5) Data frame handling
I0511 17:24:15.768847 7 log.go:172] (0xc001d2f080) Data frame received for 3
I0511 17:24:15.768882 7 log.go:172] (0xc0005cadc0) (3) Data frame handling
I0511 17:24:15.770246 7 log.go:172] (0xc001d2f080) Data frame received for 1
I0511 17:24:15.770268 7 log.go:172] (0xc0017da320) (1) Data frame handling
I0511 17:24:15.770282 7 log.go:172] (0xc0017da320) (1) Data frame sent
I0511 17:24:15.770294 7 log.go:172] (0xc001d2f080) (0xc0017da320) Stream removed, broadcasting: 1
I0511 17:24:15.770308 7 log.go:172] (0xc001d2f080) Go away received
I0511 17:24:15.770379 7 log.go:172] (0xc001d2f080) (0xc0017da320) Stream removed, broadcasting: 1
I0511 17:24:15.770402 7 log.go:172] (0xc001d2f080) (0xc0005cadc0) Stream removed, broadcasting: 3
I0511 17:24:15.770418 7 log.go:172] (0xc001d2f080) (0xc0019fc500) Stream removed, broadcasting: 5
May 11 17:24:15.770: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:24:15.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-959" for this suite.
May 11 17:24:39.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:24:39.978: INFO: namespace pod-network-test-959 deletion completed in 24.198121984s

• [SLOW TEST:51.915 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
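The curl above hits the test webserver's /dial endpoint, which relays a UDP probe to the target pod and reports which hostnames answered. A standalone Go sketch of the same probe; the IPs are the pod IPs from this run, and the JSON shape (a "responses" array) is an assumption about the test image's reply, not something confirmed by this log:

```go
// Sketch: issue the same /dial probe the e2e test runs via curl, and print
// the hostnames the target pod reported back over UDP.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	url := "http://10.244.2.63:8080/dial?request=hostName&protocol=udp" +
		"&host=10.244.1.127&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Assumed reply shape: {"responses":["<hostname>", ...]}.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("hostnames heard over UDP:", out.Responses)
}
```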
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:24:39.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 17:24:40.130: INFO: Waiting up to 5m0s for pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7" in namespace "emptydir-8969" to be "success or failure"
May 11 17:24:40.173: INFO: Pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.29761ms
May 11 17:24:42.513: INFO: Pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383332033s
May 11 17:24:44.583: INFO: Pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7": Phase="Running", Reason="", readiness=true. Elapsed: 4.453014611s
May 11 17:24:46.587: INFO: Pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.45715874s
STEP: Saw pod success
May 11 17:24:46.587: INFO: Pod "pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7" satisfied condition "success or failure"
May 11 17:24:46.590: INFO: Trying to get logs from node iruya-worker pod pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7 container test-container:
STEP: delete the pod
May 11 17:24:46.867: INFO: Waiting for pod pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7 to disappear
May 11 17:24:46.996: INFO: Pod pod-5a2db4e0-e9ad-4b9d-885d-ff139523b7c7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:24:46.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8969" for this suite.
May 11 17:24:53.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:24:53.274: INFO: namespace emptydir-8969 deletion completed in 6.273887814s

• [SLOW TEST:13.295 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
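The "(non-root,0777,default)" triple means: run the container as a non-root user, write with 0777 permissions, and use an emptyDir on the default medium (the node's backing storage rather than tmpfs). A sketch of the relevant pod-spec pieces (UID, image, and paths are illustrative):

```go
// Sketch: a non-root pod with an emptyDir volume on the default medium,
// mirroring the shape of the emptydir test pod above.
package sketch

import corev1 "k8s.io/api/core/v1"

func emptyDirPodSpec() corev1.PodSpec {
	uid := int64(1001) // non-root UID, illustrative
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		RestartPolicy:   corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name: "test-container",
			Image: "busybox",
			// The e2e image writes a file and checks its mode; here we just
			// inspect the mount.
			Command: []string{"sh", "-c", "ls -ld /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			// An empty EmptyDirVolumeSource selects the default medium.
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		}},
	}
}
```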
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:24:53.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 11 17:24:53.773: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 17:24:53.793: INFO: Waiting for terminating namespaces to be deleted...
May 11 17:24:53.796: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 11 17:24:53.800: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.800: INFO: Container kindnet-cni ready: true, restart count 0
May 11 17:24:53.800: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.800: INFO: Container kube-proxy ready: true, restart count 0
May 11 17:24:53.800: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 11 17:24:53.806: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.806: INFO: Container coredns ready: true, restart count 0
May 11 17:24:53.806: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.806: INFO: Container coredns ready: true, restart count 0
May 11 17:24:53.806: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.806: INFO: Container kube-proxy ready: true, restart count 0
May 11 17:24:53.806: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 11 17:24:53.806: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e0941225777f0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:24:54.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4419" for this suite.
May 11 17:25:00.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:25:01.010: INFO: namespace sched-pred-4419 deletion completed in 6.140222737s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.736 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
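The FailedScheduling event is the expected outcome here: the pod asks for a node label that no node carries, so the scheduler reports "0/3 nodes are available". A sketch of such a pod (the label pair and image are illustrative):

```go
// Sketch: a pod with a nodeSelector no node satisfies, which the scheduler
// rejects with a FailedScheduling event like the one above.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this label pair.
			NodeSelector: map[string]string{"env": "does-not-exist"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}
```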
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:25:01.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7931
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7931
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7931
May 11 17:25:01.763: INFO: Found 0 stateful pods, waiting for 1
May 11 17:25:11.767: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 11 17:25:11.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 17:25:12.004: INFO: stderr: "I0511 17:25:11.896974 83 log.go:172] (0xc000141340) (0xc0005ceaa0) Create stream\nI0511 17:25:11.897011 83 log.go:172] (0xc000141340) (0xc0005ceaa0) Stream added, broadcasting: 1\nI0511 17:25:11.898640 83 log.go:172] (0xc000141340) Reply frame received for 1\nI0511 17:25:11.898667 83 log.go:172] (0xc000141340) (0xc000a7c000) Create stream\nI0511 17:25:11.898681 83 log.go:172] (0xc000141340) (0xc000a7c000) Stream added, broadcasting: 3\nI0511 17:25:11.899340 83 log.go:172] (0xc000141340) Reply frame received for 3\nI0511 17:25:11.899379 83 log.go:172] (0xc000141340) (0xc000a4a000) Create stream\nI0511 17:25:11.899395 83 log.go:172] (0xc000141340) (0xc000a4a000) Stream added, broadcasting: 5\nI0511 17:25:11.900001 83 log.go:172] (0xc000141340) Reply frame received for 5\nI0511 17:25:11.975365 83 log.go:172] (0xc000141340) Data frame received for 5\nI0511 17:25:11.975397 83 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0511 17:25:11.975422 83 log.go:172] (0xc000a4a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 17:25:11.998732 83 log.go:172] (0xc000141340) Data frame received for 3\nI0511 17:25:11.998766 83 log.go:172] (0xc000a7c000) (3) Data frame handling\nI0511 17:25:11.998785 83 log.go:172] (0xc000a7c000) (3) Data frame sent\nI0511 17:25:11.998804 83 log.go:172] (0xc000141340) Data frame received for 3\nI0511 17:25:11.998818 83 log.go:172] (0xc000a7c000) (3) Data frame handling\nI0511 17:25:11.998936 83 log.go:172] (0xc000141340) Data frame received for 5\nI0511 17:25:11.998956 83 log.go:172] (0xc000a4a000) (5) Data frame handling\nI0511 17:25:12.000756 83 log.go:172] (0xc000141340) Data frame received for 1\nI0511 17:25:12.000769 83 log.go:172] (0xc0005ceaa0) (1) Data frame handling\nI0511 17:25:12.000778 83 log.go:172] (0xc0005ceaa0) (1) Data frame sent\nI0511 17:25:12.000788 83 log.go:172] (0xc000141340) (0xc0005ceaa0) Stream removed, broadcasting: 1\nI0511 17:25:12.000847 83 log.go:172] (0xc000141340) Go away received\nI0511 17:25:12.001019 83 log.go:172] (0xc000141340) (0xc0005ceaa0) Stream removed, broadcasting: 1\nI0511 17:25:12.001030 83 log.go:172] (0xc000141340) (0xc000a7c000) Stream removed, broadcasting: 3\nI0511 17:25:12.001036 83 log.go:172] (0xc000141340) (0xc000a4a000) Stream removed, broadcasting: 5\n"
May 11 17:25:12.004: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 17:25:12.004: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 17:25:12.007: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 11 17:25:22.278: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 17:25:22.278: INFO: Waiting for statefulset status.replicas updated to 0
May 11 17:25:22.314: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999636s
May 11 17:25:23.320: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973034937s
May 11 17:25:24.323: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967694678s
May 11 17:25:25.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964174286s
May 11 17:25:26.336: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959918737s
May 11 17:25:27.339: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.951018983s
May 11 17:25:28.343: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.947966893s
May 11 17:25:29.348: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943912501s
May 11 17:25:30.353: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.939526608s
May 11 17:25:31.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 934.249167ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7931
May 11 17:25:32.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 17:25:32.597: INFO: stderr: "I0511 17:25:32.518659 103 log.go:172] (0xc00096a2c0) (0xc000830640) Create stream\nI0511 17:25:32.518712 103 log.go:172] (0xc00096a2c0) (0xc000830640) Stream added, broadcasting: 1\nI0511 17:25:32.520252 103 log.go:172] (0xc00096a2c0) Reply frame received for 1\nI0511 17:25:32.520291 103 log.go:172] (0xc00096a2c0) (0xc000922000) Create stream\nI0511 17:25:32.520303 103 log.go:172] (0xc00096a2c0) (0xc000922000) Stream added, broadcasting: 3\nI0511 17:25:32.521026 103 log.go:172] (0xc00096a2c0) Reply frame received for 3\nI0511 17:25:32.521047 103 log.go:172] (0xc00096a2c0) (0xc0005a2280) Create stream\nI0511 17:25:32.521055 103 log.go:172] (0xc00096a2c0) (0xc0005a2280) Stream added, broadcasting: 5\nI0511 17:25:32.521785 103 log.go:172] (0xc00096a2c0) Reply frame received for 5\nI0511 17:25:32.591332 103 log.go:172] (0xc00096a2c0) Data frame received for 3\nI0511 17:25:32.591353 103 log.go:172] (0xc000922000) (3) Data frame handling\nI0511 17:25:32.591362 103 log.go:172] (0xc000922000) (3) Data frame sent\nI0511 17:25:32.591368 103 log.go:172] (0xc00096a2c0) Data frame received for 3\nI0511 17:25:32.591373 103 log.go:172] (0xc000922000) (3) Data frame handling\nI0511 17:25:32.591637 103 log.go:172] (0xc00096a2c0) Data frame received for 5\nI0511 17:25:32.591648 103 log.go:172] (0xc0005a2280) (5) Data frame handling\nI0511 17:25:32.591659 103 log.go:172] (0xc0005a2280) (5) Data frame sent\nI0511 17:25:32.591668 103 log.go:172] (0xc00096a2c0) Data frame received for 5\nI0511 17:25:32.591672 103 log.go:172] (0xc0005a2280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 17:25:32.592766 103 log.go:172] (0xc00096a2c0) Data frame received for 1\nI0511 17:25:32.592793 103 log.go:172] (0xc000830640) (1) Data frame handling\nI0511 17:25:32.592822 103 log.go:172] (0xc000830640) (1) Data frame sent\nI0511 17:25:32.592843 103 log.go:172] (0xc00096a2c0) (0xc000830640) Stream removed, broadcasting: 1\nI0511 17:25:32.592875 103 log.go:172] (0xc00096a2c0) Go away received\nI0511 17:25:32.593326 103 log.go:172] (0xc00096a2c0) (0xc000830640) Stream removed, broadcasting: 1\nI0511 17:25:32.593347 103 log.go:172] (0xc00096a2c0) (0xc000922000) Stream removed, broadcasting: 3\nI0511 17:25:32.593364 103 log.go:172] (0xc00096a2c0) (0xc0005a2280) Stream removed, broadcasting: 5\n"
May 11 17:25:32.597: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 17:25:32.597: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 17:25:32.600: INFO: Found 1 stateful pods, waiting for 3
May 11 17:25:42.604: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 17:25:42.605: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 17:25:42.605: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
May 11 17:25:52.603: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 17:25:52.603: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 17:25:52.603: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 11 17:25:52.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 17:25:52.809: INFO: stderr: "I0511 17:25:52.724910 124 log.go:172] (0xc000972420) (0xc00030e6e0) Create stream\nI0511 17:25:52.724958 124 log.go:172] (0xc000972420) (0xc00030e6e0) Stream added, broadcasting: 1\nI0511 17:25:52.727049 124 log.go:172] (0xc000972420) Reply frame received for 1\nI0511 17:25:52.727118 124 log.go:172] (0xc000972420) (0xc000884000) Create stream\nI0511 17:25:52.727156 124 log.go:172] (0xc000972420) (0xc000884000) Stream added, broadcasting: 3\nI0511 17:25:52.728213 124 log.go:172] (0xc000972420) Reply frame received for 3\nI0511 17:25:52.728238 124 log.go:172] (0xc000972420) (0xc0008840a0) Create stream\nI0511 17:25:52.728247 124 log.go:172] (0xc000972420) (0xc0008840a0) Stream added, broadcasting: 5\nI0511 17:25:52.728862 124 log.go:172] (0xc000972420) Reply frame received for 5\nI0511 17:25:52.805758 124 log.go:172] (0xc000972420) Data frame received for 5\nI0511 17:25:52.805779 124 log.go:172] (0xc0008840a0) (5) Data frame handling\nI0511 17:25:52.805788 124 log.go:172] (0xc0008840a0) (5) Data frame sent\nI0511 17:25:52.805799 124 log.go:172] (0xc000972420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 17:25:52.805810 124 log.go:172] (0xc0008840a0) (5) Data frame handling\nI0511 17:25:52.805833 124 log.go:172] (0xc000972420) Data frame received for 3\nI0511 17:25:52.805844 124 log.go:172] (0xc000884000) (3) Data frame handling\nI0511 17:25:52.805852 124 log.go:172] (0xc000884000) (3) Data frame sent\nI0511 17:25:52.805863 124 log.go:172] (0xc000972420) Data frame received for 3\nI0511 17:25:52.805870 124 log.go:172] (0xc000884000) (3) Data frame handling\nI0511 17:25:52.806480 124 log.go:172] (0xc000972420) Data frame received for 1\nI0511 17:25:52.806503 124 log.go:172] (0xc00030e6e0) (1) Data frame handling\nI0511 17:25:52.806510 124 log.go:172] (0xc00030e6e0) (1) Data frame sent\nI0511 17:25:52.806525 124 log.go:172] (0xc000972420) (0xc00030e6e0) Stream removed, broadcasting: 1\nI0511 17:25:52.806538 124 log.go:172] (0xc000972420) Go away received\nI0511 17:25:52.806817 124 log.go:172] (0xc000972420) (0xc00030e6e0) Stream removed, broadcasting: 1\nI0511 17:25:52.806830 124 log.go:172] (0xc000972420) (0xc000884000) Stream removed, broadcasting: 3\nI0511 17:25:52.806836 124 log.go:172] (0xc000972420) (0xc0008840a0) Stream removed, broadcasting: 5\n"
May 11 17:25:52.809: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 17:25:52.809: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 17:25:52.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 17:25:53.879: INFO: stderr: "I0511 17:25:52.945495 147 log.go:172] (0xc00096e370) (0xc00056eb40) Create stream\nI0511 17:25:52.945530 147 log.go:172] (0xc00096e370) (0xc00056eb40) Stream added, broadcasting: 1\nI0511 17:25:52.947181 147 log.go:172] (0xc00096e370) Reply frame received for 1\nI0511 17:25:52.947208 147 log.go:172] (0xc00096e370) (0xc000792000) Create stream\nI0511 17:25:52.947216 147 log.go:172] (0xc00096e370) (0xc000792000) Stream added, broadcasting: 3\nI0511 17:25:52.947997 147 log.go:172] (0xc00096e370) Reply frame received for 3\nI0511 17:25:52.948059 147 log.go:172] (0xc00096e370) (0xc0007f0000) Create stream\nI0511 17:25:52.948077 147 log.go:172] (0xc00096e370) (0xc0007f0000) Stream added, broadcasting: 5\nI0511 17:25:52.948799 147 log.go:172] (0xc00096e370) Reply frame received for 5\nI0511 17:25:53.002853 147 log.go:172] (0xc00096e370) Data frame received for 5\nI0511 17:25:53.002872 147 log.go:172] (0xc0007f0000) (5) Data frame handling\nI0511 17:25:53.002884 147 log.go:172] (0xc0007f0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 17:25:53.873889 147 log.go:172] (0xc00096e370) Data frame received for 5\nI0511 17:25:53.873923 147 log.go:172] (0xc0007f0000) (5) Data frame handling\nI0511 17:25:53.873942 147 log.go:172] (0xc00096e370) Data frame received for 3\nI0511 17:25:53.873951 147 log.go:172] (0xc000792000) (3) Data frame handling\nI0511 17:25:53.873961 147 log.go:172] (0xc000792000) (3) Data frame sent\nI0511 17:25:53.873970 147 log.go:172] (0xc00096e370) Data frame received for 3\nI0511 17:25:53.873981 147 log.go:172] (0xc000792000) (3) Data frame handling\nI0511 17:25:53.875657 147 log.go:172] (0xc00096e370) Data frame received for 1\nI0511 17:25:53.875668 147 log.go:172] (0xc00056eb40) (1) Data frame handling\nI0511 17:25:53.875681 147 log.go:172] (0xc00056eb40) (1) Data frame sent\nI0511 17:25:53.875695 147 log.go:172] (0xc00096e370) (0xc00056eb40) Stream removed, broadcasting: 1\nI0511 17:25:53.875799 147 log.go:172] (0xc00096e370) Go away received\nI0511 17:25:53.876068 147 log.go:172] (0xc00096e370) (0xc00056eb40) Stream removed, broadcasting: 1\nI0511 17:25:53.876095 147 log.go:172] (0xc00096e370) (0xc000792000) Stream removed, broadcasting: 3\nI0511 17:25:53.876107 147 log.go:172] (0xc00096e370) (0xc0007f0000) Stream removed, broadcasting: 5\n"
May 11 17:25:53.879: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 17:25:53.879: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 17:25:53.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 17:25:54.306: INFO: stderr: "I0511 17:25:54.151722 165 log.go:172] (0xc000476420) (0xc0002f6820) Create stream\nI0511 17:25:54.151760 165 log.go:172] (0xc000476420) (0xc0002f6820) Stream added, broadcasting: 1\nI0511 17:25:54.154388 165 log.go:172] (0xc000476420) Reply frame received for 1\nI0511 17:25:54.154418 165 log.go:172] (0xc000476420) (0xc0002f6000) Create stream\nI0511 17:25:54.154430 165 log.go:172] (0xc000476420) (0xc0002f6000) Stream added, broadcasting: 3\nI0511 17:25:54.154992 165 log.go:172] (0xc000476420) Reply frame received for 3\nI0511 17:25:54.155016 165 log.go:172] (0xc000476420) (0xc0002f6140) Create stream\nI0511 17:25:54.155026 165 log.go:172] (0xc000476420) (0xc0002f6140) Stream added, broadcasting: 5\nI0511 17:25:54.155578 165 log.go:172] (0xc000476420) Reply frame received for 5\nI0511 17:25:54.236283 165 log.go:172] (0xc000476420) Data frame received for 5\nI0511 17:25:54.236299 165 log.go:172] (0xc0002f6140) (5) Data frame handling\nI0511 17:25:54.236307 165 log.go:172] (0xc0002f6140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 17:25:54.300623 165 log.go:172] (0xc000476420) Data frame received for 3\nI0511 17:25:54.300661 165 log.go:172] (0xc0002f6000) (3) Data frame handling\nI0511 17:25:54.300670 165 log.go:172] (0xc0002f6000) (3) Data frame sent\nI0511 17:25:54.300690 165 log.go:172] (0xc000476420) Data frame received for 5\nI0511 17:25:54.300698 165 log.go:172] (0xc0002f6140) (5) Data frame handling\nI0511 17:25:54.300985 165 log.go:172] (0xc000476420) Data frame received for 3\nI0511 17:25:54.301006 165 log.go:172] (0xc0002f6000) (3) Data frame handling\nI0511 17:25:54.302194 165 log.go:172] (0xc000476420) Data frame received for 1\nI0511 17:25:54.302209 165 log.go:172] (0xc0002f6820) (1) Data frame handling\nI0511 17:25:54.302215 165 log.go:172] (0xc0002f6820) (1) Data frame sent\nI0511 17:25:54.302221 165 log.go:172] (0xc000476420) (0xc0002f6820) Stream removed, broadcasting: 1\nI0511 17:25:54.302228 165 log.go:172] (0xc000476420) Go away received\nI0511 17:25:54.302689 165 log.go:172] (0xc000476420) (0xc0002f6820) Stream removed, broadcasting: 1\nI0511 17:25:54.302720 165 log.go:172] (0xc000476420) (0xc0002f6000) Stream removed, broadcasting: 3\nI0511 17:25:54.302730 165 log.go:172] (0xc000476420) (0xc0002f6140) Stream removed, broadcasting: 5\n"
May 11 17:25:54.306: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 17:25:54.306: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 17:25:54.306: INFO: Waiting for statefulset status.replicas updated to 0
May 11 17:25:54.309: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 11 17:26:04.318: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 17:26:04.318: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 11 17:26:04.318: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 11 17:26:04.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999398s
May 11 17:26:05.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992787715s
May 11 17:26:06.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986886863s
May 11 17:26:07.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.944480447s
May 11 17:26:08.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94079119s
May 11 17:26:09.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.936586871s
May 11 17:26:10.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.746494457s
May 11 17:26:11.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.738656274s
May 11 17:26:12.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.733999645s
May 11 17:26:13.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 729.137557ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7931
May 11 17:26:15.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 17:26:16.240: INFO: stderr: "I0511 17:26:16.132576 185 log.go:172] (0xc000a82370) (0xc0008a8640) Create stream\nI0511 17:26:16.132633 185 log.go:172] (0xc000a82370) (0xc0008a8640) Stream added, broadcasting: 1\nI0511 17:26:16.134949 185 log.go:172] (0xc000a82370) Reply frame received for 1\nI0511 17:26:16.134976 185 log.go:172] (0xc000a82370) (0xc00098c000) Create stream\nI0511 17:26:16.134996 185 log.go:172] (0xc000a82370) (0xc00098c000) Stream added, broadcasting: 3\nI0511 17:26:16.135671 185 log.go:172] (0xc000a82370) Reply frame received for 3\nI0511 17:26:16.135703 185 log.go:172] (0xc000a82370) (0xc0008a86e0) Create stream\nI0511 17:26:16.135715 185 log.go:172] (0xc000a82370) (0xc0008a86e0) Stream added, broadcasting: 5\nI0511 17:26:16.136287 185 log.go:172] (0xc000a82370) Reply frame received for 5\nI0511 17:26:16.236609 185 log.go:172] (0xc000a82370) Data frame received for 5\nI0511 17:26:16.236648 185 log.go:172] (0xc0008a86e0) (5) Data frame handling\nI0511 17:26:16.236660 185 log.go:172] (0xc0008a86e0) (5) Data frame sent\nI0511 17:26:16.236667 185 log.go:172] (0xc000a82370) Data frame received for 5\nI0511 17:26:16.236672 185 log.go:172] (0xc0008a86e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 17:26:16.236686 185 log.go:172] (0xc000a82370) Data frame received for 3\nI0511 17:26:16.236693 185 log.go:172] (0xc00098c000) (3) Data frame handling\nI0511 17:26:16.236699 185 log.go:172] (0xc00098c000) (3) Data frame sent\nI0511 17:26:16.236706 185 log.go:172] (0xc000a82370) Data frame received for 3\nI0511 17:26:16.236713 185 log.go:172] (0xc00098c000) (3) Data frame handling\nI0511 17:26:16.237656 185 log.go:172] (0xc000a82370) Data frame received for 1\nI0511 17:26:16.237675 185 log.go:172] (0xc0008a8640) (1) Data frame handling\nI0511 17:26:16.237684 185 log.go:172] (0xc0008a8640) (1) Data frame sent\nI0511 17:26:16.237701 185 log.go:172] (0xc000a82370) (0xc0008a8640) Stream removed, broadcasting: 1\nI0511 17:26:16.237713 185 log.go:172] (0xc000a82370) Go away received\nI0511 17:26:16.238054 185 log.go:172] (0xc000a82370) (0xc0008a8640) Stream removed, broadcasting: 1\nI0511 17:26:16.238069 185 log.go:172] (0xc000a82370) (0xc00098c000) Stream removed, broadcasting: 3\nI0511 17:26:16.238076 185 log.go:172] (0xc000a82370) (0xc0008a86e0) Stream removed, broadcasting: 5\n"
May 11 17:26:16.240: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 17:26:16.240: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 17:26:16.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 17:26:16.416: INFO: stderr: "I0511 17:26:16.347029 206 log.go:172] (0xc0006eab00) (0xc000968640) Create stream\nI0511 17:26:16.347070 206 log.go:172] (0xc0006eab00) (0xc000968640) Stream added, broadcasting: 1\nI0511 17:26:16.349397 206 log.go:172] (0xc0006eab00) Reply frame received for 1\nI0511 17:26:16.349447 206 log.go:172] (0xc0006eab00) (0xc000884000) Create stream\nI0511 17:26:16.349466 206 log.go:172] (0xc0006eab00) (0xc000884000) Stream added, broadcasting: 3\nI0511 17:26:16.350195 206 log.go:172] (0xc0006eab00) Reply frame received for 3\nI0511 17:26:16.350214 206 log.go:172] (0xc0006eab00) (0xc0009686e0) Create stream\nI0511 17:26:16.350223 206 log.go:172] (0xc0006eab00) (0xc0009686e0) Stream added, broadcasting: 5\nI0511 17:26:16.350884 206 log.go:172] (0xc0006eab00) Reply frame received for 5\nI0511 17:26:16.412555 206 log.go:172] (0xc0006eab00) Data frame received for 3\nI0511 17:26:16.412576 206 log.go:172] (0xc000884000) (3) Data frame handling\nI0511 17:26:16.412586 206 log.go:172] (0xc000884000) (3) Data frame sent\nI0511 17:26:16.412660 206 log.go:172] (0xc0006eab00) Data frame received for 3\nI0511 17:26:16.412670 206 log.go:172] (0xc000884000) (3) Data frame handling\nI0511 17:26:16.412685 206 log.go:172] (0xc0006eab00) Data frame received for 5\nI0511 17:26:16.412693 206 log.go:172] (0xc0009686e0) (5) Data frame handling\nI0511 17:26:16.412701 206 log.go:172] (0xc0009686e0) (5) Data frame sent\nI0511 17:26:16.412709 206 log.go:172] (0xc0006eab00) Data frame received for 5\nI0511 17:26:16.412715 206 log.go:172] (0xc0009686e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 17:26:16.413886 206 log.go:172] (0xc0006eab00) Data frame received for 1\nI0511 17:26:16.413902 206 log.go:172] (0xc000968640) (1) Data frame handling\nI0511 17:26:16.413908 206 log.go:172] (0xc000968640) (1) Data frame sent\nI0511 17:26:16.413914 206 log.go:172] (0xc0006eab00) (0xc000968640) Stream removed, broadcasting: 1\nI0511 17:26:16.414060 206 log.go:172] (0xc0006eab00) Go away received\nI0511 17:26:16.414121 206 log.go:172] (0xc0006eab00) (0xc000968640) Stream removed, broadcasting: 1\nI0511 17:26:16.414136 206 log.go:172] (0xc0006eab00) (0xc000884000) Stream removed, broadcasting: 3\nI0511 17:26:16.414143 206 log.go:172] (0xc0006eab00) (0xc0009686e0) Stream removed, broadcasting: 5\n"
May 11 17:26:16.416: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 17:26:16.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 17:26:16.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7931 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 17:26:16.789: INFO: stderr: "I0511 17:26:16.714891 226 log.go:172] (0xc000a8e0b0) (0xc000650500) Create stream\nI0511 17:26:16.714934 226 log.go:172] (0xc000a8e0b0) (0xc000650500) Stream added, broadcasting: 1\nI0511 17:26:16.718591 226 log.go:172] (0xc000a8e0b0) Reply frame received for 1\nI0511 17:26:16.718632 226 log.go:172] (0xc000a8e0b0) (0xc0006505a0) Create stream\nI0511 17:26:16.718644 226 log.go:172] (0xc000a8e0b0) (0xc0006505a0) Stream added, broadcasting: 3\nI0511 17:26:16.721869 226 log.go:172] (0xc000a8e0b0) Reply frame received for 3\nI0511 17:26:16.721904 226 log.go:172] (0xc000a8e0b0) (0xc00092a000) Create stream\nI0511 17:26:16.721917 226 log.go:172] (0xc000a8e0b0) (0xc00092a000) Stream added, broadcasting: 5\nI0511 17:26:16.724629 226 log.go:172] (0xc000a8e0b0) Reply frame received for 5\nI0511 17:26:16.785775 226 log.go:172] (0xc000a8e0b0) Data frame received for 5\nI0511 17:26:16.785798 226 log.go:172] (0xc00092a000) (5) Data frame handling\nI0511 17:26:16.785804 226 log.go:172] (0xc00092a000) (5) Data frame sent\nI0511 17:26:16.785808 226 log.go:172] (0xc000a8e0b0) Data frame received for 5\nI0511 17:26:16.785812 226 log.go:172] (0xc00092a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 17:26:16.785825 226 log.go:172] (0xc000a8e0b0) Data frame received for 3\nI0511 17:26:16.785829 226 log.go:172] (0xc0006505a0) (3) Data frame handling\nI0511 17:26:16.785834 226 log.go:172] (0xc0006505a0) (3) Data frame sent\nI0511 17:26:16.785837 226 log.go:172] (0xc000a8e0b0) Data frame received for 3\nI0511 17:26:16.785840 226 log.go:172] (0xc0006505a0) (3) Data frame handling\nI0511 17:26:16.786456 226 log.go:172] (0xc000a8e0b0) Data frame received for 1\nI0511 17:26:16.786467 226 log.go:172] (0xc000650500) (1) Data frame handling\nI0511 17:26:16.786474 226 log.go:172] (0xc000650500) (1) Data frame sent\nI0511 17:26:16.786485 226 log.go:172] (0xc000a8e0b0) (0xc000650500) Stream removed, broadcasting: 1\nI0511 17:26:16.786718 226 log.go:172] (0xc000a8e0b0) (0xc000650500) Stream removed, broadcasting: 1\nI0511 17:26:16.786727 226 log.go:172] (0xc000a8e0b0) (0xc0006505a0) Stream removed, broadcasting: 3\nI0511 17:26:16.786796 226 log.go:172] (0xc000a8e0b0) (0xc00092a000) Stream removed, broadcasting: 5\n"
May 11 17:26:16.789: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 17:26:16.789: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 17:26:16.789: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 11 17:26:47.393: INFO: Deleting all statefulset in ns statefulset-7931
May 11 17:26:47.395: INFO: Scaling statefulset ss to 0
May 11 17:26:47.401: INFO: Waiting for statefulset status.replicas updated to 0
May 11 17:26:47.403: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:26:47.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7931" for this suite.
May 11 17:26:53.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:26:53.601: INFO: namespace statefulset-7931 deletion completed in 6.185758284s

• [SLOW TEST:112.591 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
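The ordered scale-up and reverse-order scale-down verified above come from a StatefulSet's default OrderedReady pod management, combined here with a readiness probe that fails once index.html is moved out of the web root (the test's `mv` trick). A sketch of a StatefulSet in that shape (names and labels mirror the log but the spec is illustrative; on client-go releases older than v0.22, ProbeHandler is spelled Handler):

```go
// Sketch: a StatefulSet whose pods are created/deleted one at a time in
// ordinal order, gated by an HTTP readiness probe on index.html.
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func orderedStatefulSet() *appsv1.StatefulSet {
	replicas := int32(3)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test", // headless service, as in the log
			Replicas:    &replicas,
			// OrderedReady is the default; spelled out for emphasis.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
					// Moving index.html away makes this probe fail, which
					// halts further ordered scaling until it is restored.
					ReadinessProbe: &corev1.Probe{
						ProbeHandler: corev1.ProbeHandler{
							HTTPGet: &corev1.HTTPGetAction{
								Path: "/index.html",
								Port: intstr.FromInt(80),
							},
						},
					},
				}}},
			},
		},
	}
}
```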
log.go:172] (0xc000a8e0b0) (0xc00092a000) Stream removed, broadcasting: 5\n" May 11 17:26:16.789: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 17:26:16.789: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 17:26:16.789: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 17:26:47.393: INFO: Deleting all statefulset in ns statefulset-7931 May 11 17:26:47.395: INFO: Scaling statefulset ss to 0 May 11 17:26:47.401: INFO: Waiting for statefulset status.replicas updated to 0 May 11 17:26:47.403: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:26:47.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7931" for this suite. May 11 17:26:53.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:26:53.601: INFO: namespace statefulset-7931 deletion completed in 6.185758284s • [SLOW TEST:112.591 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:26:53.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 17:26:53.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2452' May 11 17:26:53.853: INFO: stderr: "" May 11 17:26:53.853: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 11 17:26:53.968: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2452' May 11 17:27:01.879: INFO: stderr: "" May 11 17:27:01.879: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:27:01.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2452" for this suite. May 11 17:27:08.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:27:08.082: INFO: namespace kubectl-2452 deletion completed in 6.166506113s • [SLOW TEST:14.481 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:27:08.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 11 17:27:08.153: INFO: namespace kubectl-8391 May 11 17:27:08.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8391' May 11 17:27:08.501: INFO: stderr: "" May 11 17:27:08.501: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 11 17:27:09.507: INFO: Selector matched 1 pods for map[app:redis] May 11 17:27:09.507: INFO: Found 0 / 1 May 11 17:27:10.505: INFO: Selector matched 1 pods for map[app:redis] May 11 17:27:10.505: INFO: Found 0 / 1 May 11 17:27:11.505: INFO: Selector matched 1 pods for map[app:redis] May 11 17:27:11.505: INFO: Found 0 / 1 May 11 17:27:12.504: INFO: Selector matched 1 pods for map[app:redis] May 11 17:27:12.504: INFO: Found 1 / 1 May 11 17:27:12.504: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 17:27:12.506: INFO: Selector matched 1 pods for map[app:redis] May 11 17:27:12.506: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 17:27:12.506: INFO: wait on redis-master startup in kubectl-8391 May 11 17:27:12.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-b5zdx redis-master --namespace=kubectl-8391' May 11 17:27:12.664: INFO: stderr: "" May 11 17:27:12.664: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 17:27:12.046 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 17:27:12.046 # Server started, Redis version 3.2.12\n1:M 11 May 17:27:12.046 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 17:27:12.046 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 11 17:27:12.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8391' May 11 17:27:12.814: INFO: stderr: "" May 11 17:27:12.814: INFO: stdout: "service/rm2 exposed\n" May 11 17:27:12.884: INFO: Service rm2 in namespace kubectl-8391 found. STEP: exposing service May 11 17:27:14.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8391' May 11 17:27:15.167: INFO: stderr: "" May 11 17:27:15.167: INFO: stdout: "service/rm3 exposed\n" May 11 17:27:15.176: INFO: Service rm3 in namespace kubectl-8391 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:27:17.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8391" for this suite. 
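For reference, the two `kubectl expose` calls above synthesize Services from an existing selector; a minimal sketch of roughly what the first call (--name=rm2 --port=1234 --target-port=6379) creates, assuming the RC's pod selector is app: redis as the test's "Selector matched 1 pods for map[app:redis]" output indicates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-8391
spec:
  selector:
    app: redis              # copied from the RC's pod selector
  ports:
  - port: 1234              # Service port (--port)
    targetPort: 6379        # container port (--target-port)
EOF

The second call (expose service rm2 --name=rm3) does the same thing again, reusing rm2's selector for the new Service.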
May 11 17:27:41.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:27:41.581: INFO: namespace kubectl-8391 deletion completed in 24.074478847s • [SLOW TEST:33.497 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:27:41.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 11 17:27:42.202: INFO: created pod pod-service-account-defaultsa May 11 17:27:42.202: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 17:27:42.219: INFO: created pod pod-service-account-mountsa May 11 17:27:42.219: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 17:27:42.249: INFO: created pod pod-service-account-nomountsa May 11 17:27:42.249: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 17:27:42.267: INFO: created pod pod-service-account-defaultsa-mountspec May 11 17:27:42.267: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 17:27:42.335: INFO: created pod pod-service-account-mountsa-mountspec May 11 17:27:42.335: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 17:27:42.356: INFO: created pod pod-service-account-nomountsa-mountspec May 11 17:27:42.356: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 17:27:42.406: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 17:27:42.406: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 17:27:42.419: INFO: created pod pod-service-account-mountsa-nomountspec May 11 17:27:42.419: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 17:27:42.492: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 17:27:42.492: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:27:42.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4321" for this suite. 
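The opt-out the nine pods above exercise is the automountServiceAccountToken field, settable on a ServiceAccount or on the pod spec (the pod-level value wins, which is why the test crosses SA settings with pod settings); a minimal sketch, pod name hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token                    # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # no token volume gets mounted
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
EOF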
May 11 17:28:18.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:28:18.739: INFO: namespace svcaccounts-4321 deletion completed in 36.206015036s • [SLOW TEST:37.158 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:28:18.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-cc1d793e-6895-40d0-96fd-c66c217eec4f [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:28:18.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1907" for this suite. May 11 17:28:24.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:28:24.998: INFO: namespace secrets-1907 deletion completed in 6.070433576s • [SLOW TEST:6.259 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:28:24.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 17:28:25.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 
--namespace=kubectl-3092' May 11 17:28:25.318: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 17:28:25.318: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller May 11 17:28:25.326: INFO: scanned /root for discovery docs: May 11 17:28:25.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3092' May 11 17:28:43.590: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 17:28:43.590: INFO: stdout: "Created e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047\nScaling up e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 11 17:28:43.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3092' May 11 17:28:43.686: INFO: stderr: "" May 11 17:28:43.687: INFO: stdout: "e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047-vkf5q " May 11 17:28:43.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047-vkf5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3092' May 11 17:28:43.770: INFO: stderr: "" May 11 17:28:43.771: INFO: stdout: "true" May 11 17:28:43.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047-vkf5q -o template --template={{if (exists .

"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3092' May 11 17:28:43.860: INFO: stderr: "" May 11 17:28:43.860: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 11 17:28:43.860: INFO: e2e-test-nginx-rc-b77314a9d7eb06d01677ec90874aa047-vkf5q is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 11 17:28:43.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3092' May 11 17:28:43.984: INFO: stderr: "" May 11 17:28:43.984: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:28:43.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3092" for this suite. May 11 17:29:06.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:29:06.176: INFO: namespace kubectl-3092 deletion completed in 22.176793879s • [SLOW TEST:41.178 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:29:06.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 17:29:06.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850" in namespace "projected-3993" to be "success or failure" May 11 17:29:06.379: INFO: Pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056358ms May 11 17:29:08.683: INFO: Pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308071089s May 11 17:29:10.700: INFO: Pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325892133s May 11 17:29:12.704: INFO: Pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.329667313s STEP: Saw pod success May 11 17:29:12.704: INFO: Pod "downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850" satisfied condition "success or failure" May 11 17:29:12.707: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850 container client-container: STEP: delete the pod May 11 17:29:12.773: INFO: Waiting for pod downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850 to disappear May 11 17:29:12.828: INFO: Pod downwardapi-volume-0cf30e8b-13fe-4b77-bbdf-b6abe3c33850 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:29:12.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3993" for this suite. May 11 17:29:18.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:29:19.047: INFO: namespace projected-3993 deletion completed in 6.215284253s • [SLOW TEST:12.870 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:29:19.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 17:29:19.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5390' May 11 17:29:19.604: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 17:29:19.604: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 11 17:29:19.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5390' May 11 17:29:20.676: INFO: stderr: "" May 11 17:29:20.676: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:29:20.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5390" for this suite. May 11 17:29:43.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:29:43.136: INFO: namespace kubectl-5390 deletion completed in 22.230260148s • [SLOW TEST:24.088 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:29:43.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-cv7h STEP: Creating a pod to test atomic-volume-subpath May 11 17:29:43.206: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cv7h" in namespace "subpath-8219" to be "success or failure" May 11 17:29:43.223: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Pending", Reason="", readiness=false. Elapsed: 16.48927ms May 11 17:29:45.226: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019290135s May 11 17:29:47.229: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 4.023057771s May 11 17:29:49.239: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 6.033008859s May 11 17:29:51.244: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 8.037320859s May 11 17:29:53.248: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.041998497s May 11 17:29:55.354: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 12.14806625s May 11 17:29:57.357: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 14.150717007s May 11 17:29:59.359: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 16.152950165s May 11 17:30:01.363: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 18.157228233s May 11 17:30:03.366: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 20.159924725s May 11 17:30:05.369: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 22.162968958s May 11 17:30:07.373: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Running", Reason="", readiness=true. Elapsed: 24.167058186s May 11 17:30:09.376: INFO: Pod "pod-subpath-test-projected-cv7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.170059849s STEP: Saw pod success May 11 17:30:09.376: INFO: Pod "pod-subpath-test-projected-cv7h" satisfied condition "success or failure" May 11 17:30:09.379: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-cv7h container test-container-subpath-projected-cv7h: STEP: delete the pod May 11 17:30:09.399: INFO: Waiting for pod pod-subpath-test-projected-cv7h to disappear May 11 17:30:09.403: INFO: Pod pod-subpath-test-projected-cv7h no longer exists STEP: Deleting pod pod-subpath-test-projected-cv7h May 11 17:30:09.403: INFO: Deleting pod "pod-subpath-test-projected-cv7h" in namespace "subpath-8219" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:30:09.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8219" for this suite. 
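The atomic-writer subpath test above mounts one entry of a projected volume via subPath; a minimal sketch of that shape, assuming a ConfigMap named demo-config with a key named key already exists (all names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/key"]
    volumeMounts:
    - name: proj
      mountPath: /etc/demo/key
      subPath: key                  # mount just this one file from the volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-config         # assumed to exist
EOF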
May 11 17:30:15.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:30:15.740: INFO: namespace subpath-8219 deletion completed in 6.331588452s • [SLOW TEST:32.604 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:30:15.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 11 17:30:16.384: INFO: Pod name pod-release: Found 0 pods out of 1 May 11 17:30:21.387: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:30:22.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2535" for this suite. 
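"Then the pod is released" above is the controller orphaning a pod whose labels no longer match its selector; a sketch of triggering that by hand, assuming the RC selects on a name=pod-release label (pod name hypothetical):

kubectl label pod pod-release-xxxxx name=pod-release-released --overwrite   # pod name hypothetical
kubectl get pods -l name=pod-release   # the relabeled pod drops out of the selector, so the RC creates a replacement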
May 11 17:30:28.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:30:29.740: INFO: namespace replication-controller-2535 deletion completed in 7.336555291s • [SLOW TEST:14.000 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:30:29.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 11 17:30:30.854: INFO: Waiting up to 5m0s for pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7" in namespace "var-expansion-9605" to be "success or failure" May 11 17:30:31.025: INFO: Pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7": Phase="Pending", Reason="", readiness=false. Elapsed: 170.622343ms May 11 17:30:33.029: INFO: Pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175084901s May 11 17:30:35.033: INFO: Pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7": Phase="Running", Reason="", readiness=true. Elapsed: 4.178938434s May 11 17:30:37.036: INFO: Pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181196441s STEP: Saw pod success May 11 17:30:37.036: INFO: Pod "var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7" satisfied condition "success or failure" May 11 17:30:37.038: INFO: Trying to get logs from node iruya-worker pod var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7 container dapi-container: STEP: delete the pod May 11 17:30:37.265: INFO: Waiting for pod var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7 to disappear May 11 17:30:37.295: INFO: Pod var-expansion-a883a0e5-27d0-459c-ae1e-36241f827ac7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:30:37.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9605" for this suite. 
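Env composition as tested above relies on $(VAR) references to earlier entries in the same env list, which the kubelet expands before container start; a minimal sketch, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix" # kubelet expands $(FOO) from the earlier entry
EOF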
May 11 17:30:45.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:30:45.623: INFO: namespace var-expansion-9605 deletion completed in 8.325793237s • [SLOW TEST:15.882 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:30:45.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 11 17:30:45.729: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 17:30:45.743: INFO: Waiting for terminating namespaces to be deleted... May 11 17:30:45.745: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 11 17:30:45.749: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 17:30:45.749: INFO: Container kube-proxy ready: true, restart count 0 May 11 17:30:45.749: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 17:30:45.749: INFO: Container kindnet-cni ready: true, restart count 0 May 11 17:30:45.749: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 11 17:30:45.754: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 11 17:30:45.754: INFO: Container kindnet-cni ready: true, restart count 0 May 11 17:30:45.754: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 11 17:30:45.754: INFO: Container kube-proxy ready: true, restart count 0 May 11 17:30:45.754: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 11 17:30:45.754: INFO: Container coredns ready: true, restart count 0 May 11 17:30:45.754: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 11 17:30:45.754: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 11 17:30:45.844: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 11 17:30:45.844: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 11 
17:30:45.844: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 11 17:30:45.844: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 11 17:30:45.844: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 11 17:30:45.844: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40.160e0993170a915d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8122/filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40.160e0993dad1d8ba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40.160e0994b0e6f56e], Reason = [Created], Message = [Created container filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40] STEP: Considering event: Type = [Normal], Name = [filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40.160e0994c443265e], Reason = [Started], Message = [Started container filler-pod-939b432d-3d66-4c0d-a28a-ff88aeb9fb40] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29.160e09931763299c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8122/filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29.160e09936092c834], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29.160e0994584bc0ea], Reason = [Created], Message = [Created container filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29] STEP: Considering event: Type = [Normal], Name = [filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29.160e0994969ef081], Reason = [Started], Message = [Started container filler-pod-ffff84c5-3a7a-4d53-b993-406002493e29] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e0994f51413bb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:30:56.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8122" for this suite. 
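The FailedScheduling event above is the predicate under test: once the filler pods consume most allocatable CPU, one more request cannot fit anywhere. A sketch of a pod that would trip it, with the request size assumed to exceed whatever remains allocatable on every schedulable node:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "4"    # assumed larger than any node's remaining allocatable CPU
EOF
kubectl describe pod additional-pod   # expect an event like: 0/N nodes are available: ... Insufficient cpu.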
May 11 17:31:05.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:31:05.551: INFO: namespace sched-pred-8122 deletion completed in 8.745448543s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:19.927 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:31:05.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-39b14b19-9b06-47ad-9c85-84fd867dbf56 STEP: Creating a pod to test consume secrets May 11 17:31:05.704: INFO: Waiting up to 5m0s for pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770" in namespace "secrets-6907" to be "success or failure" May 11 17:31:05.709: INFO: Pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036056ms May 11 17:31:07.712: INFO: Pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007705989s May 11 17:31:09.882: INFO: Pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770": Phase="Running", Reason="", readiness=true. Elapsed: 4.177650003s May 11 17:31:11.886: INFO: Pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181296939s STEP: Saw pod success May 11 17:31:11.886: INFO: Pod "pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770" satisfied condition "success or failure" May 11 17:31:11.888: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770 container secret-volume-test: STEP: delete the pod May 11 17:31:11.907: INFO: Waiting for pod pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770 to disappear May 11 17:31:11.912: INFO: Pod pod-secrets-399a04b6-ca63-48f8-adfe-04809aa7e770 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:31:11.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6907" for this suite. 
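The defaultMode variant above sets the file mode applied to every key projected from the secret into the volume; a minimal sketch, secret name hypothetical and 0400 chosen as an example mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo  # assumed to exist
      defaultMode: 0400             # mode applied to every projected key
EOF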
May 11 17:31:21.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:31:22.005: INFO: namespace secrets-6907 deletion completed in 10.090232333s • [SLOW TEST:16.454 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:31:22.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 11 17:31:22.463: INFO: Waiting up to 5m0s for pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c" in namespace "var-expansion-5371" to be "success or failure" May 11 17:31:22.954: INFO: Pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c": Phase="Pending", Reason="", readiness=false. Elapsed: 491.174546ms May 11 17:31:24.957: INFO: Pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494196895s May 11 17:31:27.019: INFO: Pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556759714s May 11 17:31:29.023: INFO: Pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.560027757s STEP: Saw pod success May 11 17:31:29.023: INFO: Pod "var-expansion-09002c69-d730-4603-8686-09d6d2ca892c" satisfied condition "success or failure" May 11 17:31:29.026: INFO: Trying to get logs from node iruya-worker pod var-expansion-09002c69-d730-4603-8686-09d6d2ca892c container dapi-container: STEP: delete the pod May 11 17:31:29.458: INFO: Waiting for pod var-expansion-09002c69-d730-4603-8686-09d6d2ca892c to disappear May 11 17:31:29.513: INFO: Pod var-expansion-09002c69-d730-4603-8686-09d6d2ca892c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:31:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5371" for this suite. 
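Substitution in a container's args, as tested above, uses the same $(VAR) syntax; the kubelet expands it from the container's env before exec, so no shell is involved. A minimal sketch, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-demo                   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["echo"]
    args: ["$(MY_VAR)"]             # expanded by the kubelet, not by a shell
    env:
    - name: MY_VAR
      value: from-env
EOF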
May 11 17:31:37.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:31:37.736: INFO: namespace var-expansion-5371 deletion completed in 8.218533012s • [SLOW TEST:15.730 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:31:37.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:31:44.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9980" for this suite. 
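The hostAliases test above checks that pod-spec aliases land in the container's /etc/hosts; a minimal sketch, names and addresses hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo            # hypothetical
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]  # the aliases appear as extra hosts entries
EOF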
May 11 17:32:34.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:32:34.383: INFO: namespace kubelet-test-9980 deletion completed in 50.292851849s • [SLOW TEST:56.647 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:32:34.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 17:32:35.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1" in namespace "downward-api-5951" to be "success or failure" May 11 17:32:35.309: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1": Phase="Pending", Reason="", readiness=false. Elapsed: 240.588802ms May 11 17:32:37.311: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243080145s May 11 17:32:39.341: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27292044s May 11 17:32:41.345: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1": Phase="Running", Reason="", readiness=true. Elapsed: 6.276743963s May 11 17:32:43.348: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.279604807s STEP: Saw pod success May 11 17:32:43.348: INFO: Pod "downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1" satisfied condition "success or failure" May 11 17:32:43.350: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1 container client-container: STEP: delete the pod May 11 17:32:43.578: INFO: Waiting for pod downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1 to disappear May 11 17:32:43.598: INFO: Pod downwardapi-volume-d5ce48b2-070f-474c-851a-e166eaa630c1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:32:43.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5951" for this suite. May 11 17:32:51.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:32:51.685: INFO: namespace downward-api-5951 deletion completed in 8.08474107s • [SLOW TEST:17.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:32:51.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 17:32:51.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228" in namespace "projected-5279" to be "success or failure" May 11 17:32:51.838: INFO: Pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228": Phase="Pending", Reason="", readiness=false. Elapsed: 27.622897ms May 11 17:32:53.907: INFO: Pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096884299s May 11 17:32:56.117: INFO: Pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306613903s May 11 17:32:58.120: INFO: Pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.309441451s STEP: Saw pod success May 11 17:32:58.120: INFO: Pod "downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228" satisfied condition "success or failure" May 11 17:32:58.122: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228 container client-container: STEP: delete the pod May 11 17:32:58.138: INFO: Waiting for pod downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228 to disappear May 11 17:32:58.142: INFO: Pod downwardapi-volume-9d53d33d-b36d-418f-9208-6e2b9d10a228 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:32:58.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5279" for this suite. May 11 17:33:04.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:33:04.265: INFO: namespace projected-5279 deletion completed in 6.119292342s • [SLOW TEST:12.579 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:33:04.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 17:33:04.687: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:33:18.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8115" for this suite. 
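"PodSpec: initContainers in spec.initContainers" above refers to the init-container list, which runs to completion one at a time before the app containers start; a minimal sketch of the RestartAlways shape, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                   # hypothetical
spec:
  restartPolicy: Always
  initContainers:                   # run sequentially; each must exit 0
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
EOF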
May 11 17:33:48.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:33:49.556: INFO: namespace init-container-8115 deletion completed in 30.665875269s
• [SLOW TEST:45.290 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:33:49.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 17:33:50.211: INFO: Waiting up to 5m0s for pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e" in namespace "emptydir-5704" to be "success or failure"
May 11 17:33:50.246: INFO: Pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.602567ms
May 11 17:33:52.249: INFO: Pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037519892s
May 11 17:33:54.295: INFO: Pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083095045s
May 11 17:33:56.298: INFO: Pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086586398s
STEP: Saw pod success
May 11 17:33:56.298: INFO: Pod "pod-4dde46d3-b4c6-45ca-978d-55597473955e" satisfied condition "success or failure"
May 11 17:33:56.300: INFO: Trying to get logs from node iruya-worker pod pod-4dde46d3-b4c6-45ca-978d-55597473955e container test-container: 
STEP: delete the pod
May 11 17:33:56.640: INFO: Waiting for pod pod-4dde46d3-b4c6-45ca-978d-55597473955e to disappear
May 11 17:33:56.701: INFO: Pod pod-4dde46d3-b4c6-45ca-978d-55597473955e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:33:56.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5704" for this suite.
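Editor's note: the InitContainer spec above checks that init containers run to completion, in order, before the app container starts on a pod with restartPolicy Always. A hedged sketch of such a pod; the names, images, and commands are assumptions rather than the suite's actual PodSpec.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                    # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox                   # assumed image
    command: ["sh", "-c", "echo first init container done"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "echo second init container done"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # starts only after both init containers exit 0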
May 11 17:34:03.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:34:03.528: INFO: namespace emptydir-5704 deletion completed in 6.823950184s
• [SLOW TEST:13.971 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:34:03.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 17:34:03.828: INFO: Waiting up to 5m0s for pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8" in namespace "emptydir-8147" to be "success or failure"
May 11 17:34:03.846: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.91739ms
May 11 17:34:05.850: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022689062s
May 11 17:34:07.854: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025815403s
May 11 17:34:10.031: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203055525s
May 11 17:34:12.141: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.313565737s
STEP: Saw pod success
May 11 17:34:12.141: INFO: Pod "pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8" satisfied condition "success or failure"
May 11 17:34:12.144: INFO: Trying to get logs from node iruya-worker pod pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8 container test-container: 
STEP: delete the pod
May 11 17:34:12.373: INFO: Waiting for pod pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8 to disappear
May 11 17:34:12.556: INFO: Pod pod-61f76e2a-1aca-4dfd-a48d-e6d5f78455f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:34:12.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8147" for this suite.
May 11 17:34:18.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:34:18.683: INFO: namespace emptydir-8147 deletion completed in 6.123121174s
• [SLOW TEST:15.154 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:34:18.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3f4fa57a-ccdc-487d-baaf-0a3865efb9b6
STEP: Creating a pod to test consume configMaps
May 11 17:34:18.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac" in namespace "configmap-1483" to be "success or failure"
May 11 17:34:18.932: INFO: Pod "pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac": Phase="Pending", Reason="", readiness=false. Elapsed: 53.871338ms
May 11 17:34:20.936: INFO: Pod "pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058377209s
May 11 17:34:22.940: INFO: Pod "pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06204081s
STEP: Saw pod success
May 11 17:34:22.940: INFO: Pod "pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac" satisfied condition "success or failure"
May 11 17:34:22.942: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac container configmap-volume-test: 
STEP: delete the pod
May 11 17:34:23.063: INFO: Waiting for pod pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac to disappear
May 11 17:34:23.067: INFO: Pod pod-configmaps-0a54fa1b-122f-43aa-ba99-a1234ec63aac no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:34:23.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1483" for this suite.
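Editor's note: the two EmptyDir specs completed above write a file with mode 0666 into an emptyDir volume on the node's default medium, once as root and once as a non-root user. A rough sketch of the non-root variant; the name, image, UID, and command are assumptions (drop the securityContext block for the root variant).

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # assumed non-root UID
  containers:
  - name: test-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium, i.e. node-local disk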
May 11 17:34:31.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:34:31.228: INFO: namespace configmap-1483 deletion completed in 8.073530242s
• [SLOW TEST:12.545 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:34:31.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 17:34:31.422: INFO: Waiting up to 5m0s for pod "pod-d67ea3c2-91be-4836-8969-82146047cd10" in namespace "emptydir-198" to be "success or failure"
May 11 17:34:31.573: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10": Phase="Pending", Reason="", readiness=false. Elapsed: 150.035504ms
May 11 17:34:33.576: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153746408s
May 11 17:34:35.580: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157388869s
May 11 17:34:37.584: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10": Phase="Running", Reason="", readiness=true. Elapsed: 6.161529825s
May 11 17:34:39.591: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16836188s
STEP: Saw pod success
May 11 17:34:39.591: INFO: Pod "pod-d67ea3c2-91be-4836-8969-82146047cd10" satisfied condition "success or failure"
May 11 17:34:39.593: INFO: Trying to get logs from node iruya-worker2 pod pod-d67ea3c2-91be-4836-8969-82146047cd10 container test-container: 
STEP: delete the pod
May 11 17:34:39.650: INFO: Waiting for pod pod-d67ea3c2-91be-4836-8969-82146047cd10 to disappear
May 11 17:34:39.654: INFO: Pod pod-d67ea3c2-91be-4836-8969-82146047cd10 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:34:39.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-198" for this suite.
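Editor's note: the ConfigMap spec completed above mounts a single key at a mapped path and sets a per-item file mode (the "Item mode set" in the test name). A minimal sketch under assumed names; only the mechanism (items with path and mode) reflects the test, all identifiers are illustrative.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo                      # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data && ls -l /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: path/to/data           # the mapping
        mode: 0400                   # the per-item mode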
May 11 17:34:45.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:34:46.119: INFO: namespace emptydir-198 deletion completed in 6.463530514s
• [SLOW TEST:14.891 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:34:46.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-be310fde-8ef4-416b-bd6f-c7d11492402f
STEP: Creating a pod to test consume configMaps
May 11 17:34:46.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9" in namespace "configmap-5279" to be "success or failure"
May 11 17:34:46.464: INFO: Pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9": Phase="Pending", Reason="", readiness=false. Elapsed: 43.456938ms
May 11 17:34:48.468: INFO: Pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047409444s
May 11 17:34:50.472: INFO: Pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052010627s
May 11 17:34:52.476: INFO: Pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055882697s
STEP: Saw pod success
May 11 17:34:52.476: INFO: Pod "pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9" satisfied condition "success or failure"
May 11 17:34:52.479: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9 container configmap-volume-test: 
STEP: delete the pod
May 11 17:34:53.307: INFO: Waiting for pod pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9 to disappear
May 11 17:34:53.391: INFO: Pod pod-configmaps-c68259b7-cc7c-4be8-8429-630e819817d9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:34:53.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5279" for this suite.
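Editor's note: the EmptyDir tmpfs spec completed above differs from the default-medium variants only in backing the volume with memory. The one field that matters is emptyDir.medium; everything else below is an illustrative assumption.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "mount | grep /test-volume"]   # should report a tmpfs mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # back the volume with tmpfs instead of node disk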
May 11 17:34:59.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:34:59.607: INFO: namespace configmap-5279 deletion completed in 6.211586126s
• [SLOW TEST:13.486 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:34:59.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-656q
STEP: Creating a pod to test atomic-volume-subpath
May 11 17:34:59.847: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-656q" in namespace "subpath-9865" to be "success or failure"
May 11 17:34:59.852: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032612ms
May 11 17:35:01.913: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06612675s
May 11 17:35:03.918: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070687956s
May 11 17:35:06.023: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 6.175892436s
May 11 17:35:08.026: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 8.179012511s
May 11 17:35:10.143: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 10.295866705s
May 11 17:35:12.188: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 12.341300802s
May 11 17:35:14.192: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 14.345143287s
May 11 17:35:16.196: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 16.348980636s
May 11 17:35:18.200: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 18.353000893s
May 11 17:35:20.203: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 20.356123519s
May 11 17:35:22.245: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 22.398088492s
May 11 17:35:24.374: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Running", Reason="", readiness=true. Elapsed: 24.52715657s
May 11 17:35:26.377: INFO: Pod "pod-subpath-test-configmap-656q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.529779528s
STEP: Saw pod success
May 11 17:35:26.377: INFO: Pod "pod-subpath-test-configmap-656q" satisfied condition "success or failure"
May 11 17:35:26.378: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-656q container test-container-subpath-configmap-656q: 
STEP: delete the pod
May 11 17:35:26.417: INFO: Waiting for pod pod-subpath-test-configmap-656q to disappear
May 11 17:35:26.609: INFO: Pod pod-subpath-test-configmap-656q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-656q
May 11 17:35:26.609: INFO: Deleting pod "pod-subpath-test-configmap-656q" in namespace "subpath-9865"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:35:26.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9865" for this suite.
May 11 17:35:32.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:35:32.754: INFO: namespace subpath-9865 deletion completed in 6.140920231s
• [SLOW TEST:33.147 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:35:32.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-9d13236f-0cf2-407c-b609-26c485206b52
STEP: Creating a pod to test consume configMaps
May 11 17:35:32.965: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d" in namespace "projected-7431" to be "success or failure"
May 11 17:35:32.985: INFO: Pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.54255ms
May 11 17:35:35.030: INFO: Pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064295185s
May 11 17:35:37.084: INFO: Pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118369926s
May 11 17:35:39.087: INFO: Pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121672879s
STEP: Saw pod success
May 11 17:35:39.087: INFO: Pod "pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d" satisfied condition "success or failure"
May 11 17:35:39.090: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d container projected-configmap-volume-test: 
STEP: delete the pod
May 11 17:35:39.111: INFO: Waiting for pod pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d to disappear
May 11 17:35:39.388: INFO: Pod pod-projected-configmaps-e721c0cd-3944-46b2-bfda-f555f89f037d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:35:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7431" for this suite.
May 11 17:35:47.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:35:47.992: INFO: namespace projected-7431 deletion completed in 8.600285169s
• [SLOW TEST:15.238 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:35:47.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 11 17:35:49.044: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 11 17:35:49.093: INFO: Number of nodes with available pods: 0
May 11 17:35:49.093: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
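Editor's note: the Subpath atomic-writer spec above keeps its pod Running while repeatedly reading a file exposed via a subPath mount of a configMap volume. A hedged sketch of the subPath mechanism (all names are assumptions; the real test also rewrites the data to check atomic updates, which is omitted here):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/data   # the single file appears at exactly this path
      subPath: data-1                # mount only this entry of the volume
  volumes:
  - name: config
    configMap:
      name: subpath-configmap        # hypothetical ConfigMap with a data-1 key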
May 11 17:35:49.903: INFO: Number of nodes with available pods: 0
May 11 17:35:49.903: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:50.908: INFO: Number of nodes with available pods: 0
May 11 17:35:50.908: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:52.208: INFO: Number of nodes with available pods: 0
May 11 17:35:52.208: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:52.916: INFO: Number of nodes with available pods: 0
May 11 17:35:52.916: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:53.907: INFO: Number of nodes with available pods: 0
May 11 17:35:53.907: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:54.907: INFO: Number of nodes with available pods: 0
May 11 17:35:54.907: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:55.908: INFO: Number of nodes with available pods: 1
May 11 17:35:55.908: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 11 17:35:56.017: INFO: Number of nodes with available pods: 1
May 11 17:35:56.017: INFO: Number of running nodes: 0, number of available pods: 1
May 11 17:35:57.021: INFO: Number of nodes with available pods: 0
May 11 17:35:57.021: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 11 17:35:57.077: INFO: Number of nodes with available pods: 0
May 11 17:35:57.077: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:58.080: INFO: Number of nodes with available pods: 0
May 11 17:35:58.080: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:35:59.162: INFO: Number of nodes with available pods: 0
May 11 17:35:59.162: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:00.080: INFO: Number of nodes with available pods: 0
May 11 17:36:00.080: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:01.080: INFO: Number of nodes with available pods: 0
May 11 17:36:01.080: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:02.081: INFO: Number of nodes with available pods: 0
May 11 17:36:02.081: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:03.179: INFO: Number of nodes with available pods: 0
May 11 17:36:03.179: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:04.081: INFO: Number of nodes with available pods: 0
May 11 17:36:04.081: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:05.081: INFO: Number of nodes with available pods: 0
May 11 17:36:05.081: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:36:06.090: INFO: Number of nodes with available pods: 1
May 11 17:36:06.090: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9162, will wait for the garbage collector to delete the pods
May 11 17:36:06.153: INFO: Deleting DaemonSet.extensions daemon-set took: 6.892195ms
May 11 17:36:06.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240481ms
May 11 17:36:22.256: INFO: Number of nodes with available pods: 0
May 11 17:36:22.257: INFO: Number of running nodes: 0, number of available pods: 0
May 11 17:36:22.261: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9162/daemonsets","resourceVersion":"10289614"},"items":null}
May 11 17:36:22.263: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9162/pods","resourceVersion":"10289614"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:36:22.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9162" for this suite.
May 11 17:36:28.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:36:28.478: INFO: namespace daemonsets-9162 deletion completed in 6.135070709s
• [SLOW TEST:40.486 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:36:28.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 17:36:29.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e" in namespace "projected-1715" to be "success or failure"
May 11 17:36:29.886: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 343.4412ms
May 11 17:36:31.890: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347381193s
May 11 17:36:33.895: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35188421s
May 11 17:36:37.053: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.509916816s
May 11 17:36:39.057: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.51385649s
May 11 17:36:41.060: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.517369826s
May 11 17:36:43.078: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.535057058s
STEP: Saw pod success
May 11 17:36:43.078: INFO: Pod "downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e" satisfied condition "success or failure"
May 11 17:36:43.136: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e container client-container: 
STEP: delete the pod
May 11 17:36:44.126: INFO: Waiting for pod downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e to disappear
May 11 17:36:44.166: INFO: Pod downwardapi-volume-45f2fb88-7f94-406d-b1ae-e5e22c33e45e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:36:44.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1715" for this suite.
May 11 17:36:54.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:36:54.270: INFO: namespace projected-1715 deletion completed in 10.092217006s
• [SLOW TEST:25.792 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:36:54.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
May 11 17:36:54.566: INFO: Waiting up to 5m0s for pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f" in namespace "emptydir-619" to be "success or failure"
May 11 17:36:54.586: INFO: Pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.919565ms
May 11 17:36:56.590: INFO: Pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024492935s
May 11 17:36:58.886: INFO: Pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319849159s
May 11 17:37:00.889: INFO: Pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.323159244s
STEP: Saw pod success
May 11 17:37:00.889: INFO: Pod "pod-7167ea0e-56e3-4175-a12e-359939a0a77f" satisfied condition "success or failure"
May 11 17:37:00.891: INFO: Trying to get logs from node iruya-worker2 pod pod-7167ea0e-56e3-4175-a12e-359939a0a77f container test-container: 
STEP: delete the pod
May 11 17:37:00.928: INFO: Waiting for pod pod-7167ea0e-56e3-4175-a12e-359939a0a77f to disappear
May 11 17:37:00.987: INFO: Pod pod-7167ea0e-56e3-4175-a12e-359939a0a77f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:37:00.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-619" for this suite.
May 11 17:37:09.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:37:09.081: INFO: namespace emptydir-619 deletion completed in 8.09079852s
• [SLOW TEST:14.811 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:37:09.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 17:37:09.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38" in namespace "projected-187" to be "success or failure"
May 11 17:37:09.610: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38": Phase="Pending", Reason="", readiness=false. Elapsed: 14.815131ms
May 11 17:37:11.773: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177679871s
May 11 17:37:13.778: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18237843s
May 11 17:37:15.802: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38": Phase="Running", Reason="", readiness=true. Elapsed: 6.205910896s
May 11 17:37:17.805: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.208945046s
STEP: Saw pod success
May 11 17:37:17.805: INFO: Pod "downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38" satisfied condition "success or failure"
May 11 17:37:17.806: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38 container client-container: 
STEP: delete the pod
May 11 17:37:17.930: INFO: Waiting for pod downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38 to disappear
May 11 17:37:17.980: INFO: Pod downwardapi-volume-5cfad208-938b-4feb-9086-dfbd8581cf38 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:37:17.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-187" for this suite.
May 11 17:37:24.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:37:24.347: INFO: namespace projected-187 deletion completed in 6.363804382s
• [SLOW TEST:15.265 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:37:24.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 17:37:24.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616" in namespace "downward-api-2923" to be "success or failure"
May 11 17:37:24.780: INFO: Pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616": Phase="Pending", Reason="", readiness=false. Elapsed: 191.63234ms
May 11 17:37:26.784: INFO: Pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195531126s
May 11 17:37:29.127: INFO: Pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538589589s
May 11 17:37:31.129: INFO: Pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.541352341s
STEP: Saw pod success
May 11 17:37:31.129: INFO: Pod "downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616" satisfied condition "success or failure"
May 11 17:37:31.131: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616 container client-container: 
STEP: delete the pod
May 11 17:37:31.344: INFO: Waiting for pod downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616 to disappear
May 11 17:37:31.555: INFO: Pod downwardapi-volume-848a6206-4c1a-4f60-89c8-09a10fdb3616 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:37:31.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2923" for this suite.
May 11 17:37:37.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:37:38.022: INFO: namespace downward-api-2923 deletion completed in 6.463624693s
• [SLOW TEST:13.675 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:37:38.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 11 17:37:42.098: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-7f4cd620-67d8-40e7-b6e8-587b373b67ca,GenerateName:,Namespace:events-3395,SelfLink:/api/v1/namespaces/events-3395/pods/send-events-7f4cd620-67d8-40e7-b6e8-587b373b67ca,UID:1fe2f8a9-e64d-4b5b-b13d-ab8410b1d582,ResourceVersion:10289886,Generation:0,CreationTimestamp:2020-05-11 17:37:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 66822873,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qdqtp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qdqtp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qdqtp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d15fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d15fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:37:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:37:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:37:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:37:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.87,StartTime:2020-05-11 17:37:38 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-11 17:37:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://216400863daa1c15215cdc6fc157142cc79d7270bacb3a66241af402e6455a42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
May 11 17:37:44.102: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 11 17:37:46.151: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:37:46.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3395" for this suite.
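Editor's note: the "complex daemon" spec earlier in this block drives a DaemonSet through a node-label change (blue, then green) and switches its update strategy to RollingUpdate, as the STEP lines record. A rough sketch of such a DaemonSet; only the name "daemon-set" and the strategy come from the log, while the label key, image, and command are assumptions.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                   # name taken from the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate              # strategy named in the STEP above
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green                 # hypothetical label key; the log only names the value
      containers:
      - name: app
        image: busybox               # assumed image
        command: ["sh", "-c", "sleep 3600"]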
May 11 17:38:24.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:38:24.397: INFO: namespace events-3395 deletion completed in 38.209975259s
• [SLOW TEST:46.374 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:38:24.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-334f4600-256a-41d3-b2a1-3cb51c64c8a9 in namespace container-probe-4610
May 11 17:38:30.511: INFO: Started pod busybox-334f4600-256a-41d3-b2a1-3cb51c64c8a9 in namespace container-probe-4610
STEP: checking the pod's current state and verifying that restartCount is present
May 11 17:38:30.514: INFO: Initial restart count of pod busybox-334f4600-256a-41d3-b2a1-3cb51c64c8a9 is 0
May 11 17:39:23.578: INFO: Restart count of pod container-probe-4610/busybox-334f4600-256a-41d3-b2a1-3cb51c64c8a9 is now 1 (53.064085502s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:39:23.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4610" for this suite.
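Editor's note: the probe spec above observes exactly one restart, driven by the exec liveness probe quoted in the test title, `cat /tmp/health`. A sketch in the spirit of the test: the container creates the probed file, removes it after a delay, and the kubelet restarts it once the probe fails. The probe command is quoted from the test name; the pod name, image, and timings are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example        # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                   # assumed image
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5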
May 11 17:39:31.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:39:31.797: INFO: namespace container-probe-4610 deletion completed in 8.105977111s
• [SLOW TEST:67.400 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:39:31.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:40:31.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4042" for this suite.
May 11 17:40:57.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:40:58.027: INFO: namespace container-probe-4042 deletion completed in 26.075071332s
• [SLOW TEST:86.229 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:40:58.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
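Editor's note: the readiness-probe spec above asserts two things at once: a pod whose readiness probe always fails keeps running (its restartCount stays 0) but never becomes Ready, because readiness failures, unlike liveness failures, never restart a container. A minimal sketch (name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready        # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                   # assumed image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails: Running, never Ready, never restarted
      periodSeconds: 5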
May 11 17:40:58.267: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:40:58.269: INFO: Number of nodes with available pods: 0
May 11 17:40:58.269: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:40:59.358: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:40:59.427: INFO: Number of nodes with available pods: 0
May 11 17:40:59.427: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:00.513: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:00.516: INFO: Number of nodes with available pods: 0
May 11 17:41:00.516: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:01.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:01.462: INFO: Number of nodes with available pods: 0
May 11 17:41:01.462: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:02.702: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:02.706: INFO: Number of nodes with available pods: 0
May 11 17:41:02.706: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:04.005: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:04.201: INFO: Number of nodes with available pods: 0
May 11 17:41:04.201: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:04.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:04.275: INFO: Number of nodes with available pods: 0
May 11 17:41:04.275: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:05.292: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:05.668: INFO: Number of nodes with available pods: 0
May 11 17:41:05.668: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:06.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:06.468: INFO: Number of nodes with available pods: 0
May 11 17:41:06.468: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:07.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:07.276: INFO: Number of nodes with available pods: 0
May 11 17:41:07.276: INFO: Node iruya-worker is running more than one daemon pod
May 11 17:41:08.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:08.543: INFO: Number of nodes with available pods: 2
May 11 17:41:08.543: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 11 17:41:08.903: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:41:09.057: INFO: Number of nodes with available pods: 2
May 11 17:41:09.057: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-547, will wait for the garbage collector to delete the pods
May 11 17:41:11.782: INFO: Deleting DaemonSet.extensions daemon-set took: 990.209109ms
May 11 17:41:12.382: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.27036ms
May 11 17:41:22.585: INFO: Number of nodes with available pods: 0
May 11 17:41:22.585: INFO: Number of running nodes: 0, number of available pods: 0
May 11 17:41:22.588: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-547/daemonsets","resourceVersion":"10290430"},"items":null}
May 11 17:41:22.632: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-547/pods","resourceVersion":"10290430"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:41:22.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-547" for this suite.
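Editor's note: the repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above show the controller skipping the node tainted node-role.kubernetes.io/master:NoSchedule. A DaemonSet that should also run on that node would need a matching toleration in its pod template; a fragment sketch follows (the taint key and effect are quoted from the log, the surrounding structure is illustrative).

# fragment of a DaemonSet pod template, not a complete manifest
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # taint key quoted in the log
        operator: Exists
        effect: NoSchedule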
May 11 17:41:32.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:41:32.792: INFO: namespace daemonsets-547 deletion completed in 10.148469247s
• [SLOW TEST:34.765 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:41:32.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 11 17:41:41.267: INFO: Successfully updated pod "annotationupdateb9aaa58b-f1dc-4e02-9fc6-2416d3ac0763"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:41:43.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9331" for this suite.
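Note: the single "Successfully updated pod" record carries the whole assertion of this test: a downward API volume projects metadata.annotations into a file, the annotations are patched, and the kubelet is expected to rewrite the file. A minimal sketch of such a volume, assuming the same k8s.io/api types (the volume name and file path are illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// annotationsVolume projects the pod's own annotations into a file; the
// kubelet refreshes that file when the annotations change, which is the
// behavior the test above waits on.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
}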
May 11 17:42:07.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:42:07.818: INFO: namespace downward-api-9331 deletion completed in 24.091641873s
• [SLOW TEST:35.026 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:42:07.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 11 17:42:08.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:42:16.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-809" for this suite.
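Note: unlike the kubectl-driven tests in this run, the websocket test talks to the pods/{name}/exec subresource directly, dialing it as a websocket (subprotocol base64.channel.k8s.io) and checking the streamed output. A sketch of how that endpoint URL is assembled with a v1.15-era client-go clientset (pre-context method signatures; the container name and command are illustrative):

package example

import (
	"net/url"

	"k8s.io/client-go/kubernetes"
)

// execURL builds the exec subresource URL for a pod; an e2e-style test then
// dials this URL as a websocket instead of going through kubectl.
func execURL(c kubernetes.Interface, ns, pod string) *url.URL {
	return c.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace(ns).
		Name(pod).
		SubResource("exec").
		Param("container", "main"). // illustrative container name
		Param("command", "echo").   // "command" is repeated, one per argv element
		Param("command", "hello").
		Param("stdout", "true").
		Param("stderr", "true").
		URL()
}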
May 11 17:42:58.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:42:58.936: INFO: namespace pods-809 deletion completed in 42.254651297s
• [SLOW TEST:51.117 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:42:58.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
May 11 17:43:12.035: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1295 pod-service-account-801ead13-1f86-4464-ac8c-a024fcad687a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 11 17:43:27.492: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1295 pod-service-account-801ead13-1f86-4464-ac8c-a024fcad687a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 11 17:43:27.755: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1295 pod-service-account-801ead13-1f86-4464-ac8c-a024fcad687a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:43:27.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1295" for this suite.
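Note: the three kubectl exec ... cat invocations above read the projected service-account credentials back out of the running container; from inside the pod they are just three files under a fixed kubelet-managed path. A minimal in-pod sketch in Go (ioutil is era-appropriate for this v1.15 run):

package example

import (
	"io/ioutil"
	"path/filepath"
)

// saDir is the fixed mount point the kubelet uses for the automounted
// service-account volume that this test asserts on.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// readServiceAccount reads the token, CA bundle, and namespace exactly as a
// container sees them when automountServiceAccountToken is enabled.
func readServiceAccount() (token, ca, namespace []byte, err error) {
	if token, err = ioutil.ReadFile(filepath.Join(saDir, "token")); err != nil {
		return nil, nil, nil, err
	}
	if ca, err = ioutil.ReadFile(filepath.Join(saDir, "ca.crt")); err != nil {
		return nil, nil, nil, err
	}
	if namespace, err = ioutil.ReadFile(filepath.Join(saDir, "namespace")); err != nil {
		return nil, nil, nil, err
	}
	return token, ca, namespace, nil
}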
May 11 17:43:38.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:43:38.665: INFO: namespace svcaccounts-1295 deletion completed in 10.508595678s • [SLOW TEST:39.729 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:43:38.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-gf62 STEP: Creating a pod to test atomic-volume-subpath May 11 17:43:41.283: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gf62" in namespace "subpath-5577" to be "success or failure" May 11 17:43:41.459: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Pending", Reason="", readiness=false. Elapsed: 176.402819ms May 11 17:43:43.462: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17964937s May 11 17:43:45.465: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18239367s May 11 17:43:47.526: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 6.243314161s May 11 17:43:49.529: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 8.24655048s May 11 17:43:51.562: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 10.27953961s May 11 17:43:53.566: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 12.282991208s May 11 17:43:55.861: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 14.57862924s May 11 17:43:57.970: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 16.686670623s May 11 17:44:00.004: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 18.721355027s May 11 17:44:02.008: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 20.724845065s May 11 17:44:04.011: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 22.728416122s May 11 17:44:06.014: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Running", Reason="", readiness=true. Elapsed: 24.731557385s May 11 17:44:08.017: INFO: Pod "pod-subpath-test-secret-gf62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.73456832s STEP: Saw pod success May 11 17:44:08.017: INFO: Pod "pod-subpath-test-secret-gf62" satisfied condition "success or failure" May 11 17:44:08.019: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-gf62 container test-container-subpath-secret-gf62: STEP: delete the pod May 11 17:44:08.336: INFO: Waiting for pod pod-subpath-test-secret-gf62 to disappear May 11 17:44:08.495: INFO: Pod pod-subpath-test-secret-gf62 no longer exists STEP: Deleting pod pod-subpath-test-secret-gf62 May 11 17:44:08.495: INFO: Deleting pod "pod-subpath-test-secret-gf62" in namespace "subpath-5577" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:44:08.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5577" for this suite. May 11 17:44:18.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:44:18.825: INFO: namespace subpath-5577 deletion completed in 10.323231742s • [SLOW TEST:40.160 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:44:18.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d869f5cd-8953-4e7e-8920-5821ec56a5c9 STEP: Creating a pod to test consume configMaps May 11 17:44:19.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33" in namespace "projected-2736" to be "success or failure" May 11 17:44:19.666: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33": Phase="Pending", Reason="", readiness=false. Elapsed: 40.852774ms May 11 17:44:21.970: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.345200024s May 11 17:44:23.974: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348986194s May 11 17:44:25.982: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.356638598s May 11 17:44:28.623: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.997802053s STEP: Saw pod success May 11 17:44:28.623: INFO: Pod "pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33" satisfied condition "success or failure" May 11 17:44:28.626: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33 container projected-configmap-volume-test: STEP: delete the pod May 11 17:44:29.136: INFO: Waiting for pod pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33 to disappear May 11 17:44:29.313: INFO: Pod pod-projected-configmaps-7333632b-fb52-46d2-a2c2-5f72545c9b33 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:44:29.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2736" for this suite. May 11 17:44:35.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:44:35.870: INFO: namespace projected-2736 deletion completed in 6.553318785s • [SLOW TEST:17.045 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:44:35.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 17:44:36.403: INFO: Creating deployment "test-recreate-deployment" May 11 17:44:36.423: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 17:44:36.768: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 11 17:44:39.029: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 17:44:39.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:41.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:43.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815876, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:45.527: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 17:44:45.535: INFO: Updating deployment test-recreate-deployment May 11 17:44:45.535: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 17:44:46.638: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-175,SelfLink:/apis/apps/v1/namespaces/deployment-175/deployments/test-recreate-deployment,UID:9be9a62a-019d-4e63-b751-43f334c6093c,ResourceVersion:10291009,Generation:2,CreationTimestamp:2020-05-11 17:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-11 17:44:45 +0000 UTC 2020-05-11 17:44:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-11 17:44:46 +0000 UTC 2020-05-11 17:44:36 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 11 17:44:46.641: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-175,SelfLink:/apis/apps/v1/namespaces/deployment-175/replicasets/test-recreate-deployment-5c8c9cc69d,UID:1e408a0f-5dc5-4e52-80cd-a1cf94896116,ResourceVersion:10291007,Generation:1,CreationTimestamp:2020-05-11 17:44:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9be9a62a-019d-4e63-b751-43f334c6093c 0xc00306cbc7 0xc00306cbc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:44:46.641: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 17:44:46.642: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-175,SelfLink:/apis/apps/v1/namespaces/deployment-175/replicasets/test-recreate-deployment-6df85df6b9,UID:aa6ec49e-b2e9-4c56-96b3-420b7082cbbb,ResourceVersion:10290997,Generation:2,CreationTimestamp:2020-05-11 17:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9be9a62a-019d-4e63-b751-43f334c6093c 0xc00306cc97 0xc00306cc98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:44:46.645: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bpq9s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bpq9s,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-175,SelfLink:/api/v1/namespaces/deployment-175/pods/test-recreate-deployment-5c8c9cc69d-bpq9s,UID:1683fab0-dce3-46a3-b906-e76f263d12e6,ResourceVersion:10291010,Generation:0,CreationTimestamp:2020-05-11 17:44:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 1e408a0f-5dc5-4e52-80cd-a1cf94896116 0xc002d14f47 0xc002d14f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vqq6b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vqq6b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vqq6b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d14fc0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d14fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 17:44:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:44:46.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-175" for this suite. May 11 17:44:54.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:44:54.750: INFO: namespace deployment-175 deletion completed in 8.101841324s • [SLOW TEST:18.879 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:44:54.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 11 17:44:55.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 11 17:44:55.121: INFO: stderr: "" May 11 17:44:55.121: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:44:55.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8760" 
for this suite.
May 11 17:45:01.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:45:01.274: INFO: namespace kubectl-8760 deletion completed in 6.149270403s
• [SLOW TEST:6.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:45:01.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
May 11 17:45:01.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9485'
May 11 17:45:02.442: INFO: stderr: ""
May 11 17:45:02.442: INFO: stdout: "pod/pause created\n"
May 11 17:45:02.442: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 11 17:45:02.442: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9485" to be "running and ready"
May 11 17:45:02.635: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 193.103461ms
May 11 17:45:04.863: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421057604s
May 11 17:45:06.994: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552541813s
May 11 17:45:08.998: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.555999233s
May 11 17:45:08.998: INFO: Pod "pause" satisfied condition "running and ready"
May 11 17:45:08.998: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
May 11 17:45:08.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9485'
May 11 17:45:09.650: INFO: stderr: ""
May 11 17:45:09.650: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 11 17:45:09.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9485'
May 11 17:45:09.809: INFO: stderr: ""
May 11 17:45:09.809: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 11 17:45:09.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9485'
May 11 17:45:10.109: INFO: stderr: ""
May 11 17:45:10.109: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 11 17:45:10.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9485'
May 11 17:45:10.444: INFO: stderr: ""
May 11 17:45:10.444: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
May 11 17:45:10.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9485'
May 11 17:45:10.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 17:45:10.848: INFO: stdout: "pod \"pause\" force deleted\n"
May 11 17:45:10.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9485'
May 11 17:45:10.938: INFO: stderr: "No resources found.\n"
May 11 17:45:10.938: INFO: stdout: ""
May 11 17:45:10.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9485 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 11 17:45:11.089: INFO: stderr: ""
May 11 17:45:11.089: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:45:11.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9485" for this suite.
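Note: kubectl label, as exercised above, is a thin wrapper around a patch on the pod: testing-label=testing-label-value merges the key in, and the trailing-dash form (testing-label-) removes it, which is why the second get shows an empty TESTING-LABEL column. A sketch of the same pair of operations against a v1.15-era client-go (pre-context Patch signature; only the label key and value come from the log):

package example

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setTestingLabel mirrors `kubectl label pods <pod> testing-label=testing-label-value`;
// with remove=true it mirrors `kubectl label pods <pod> testing-label-`,
// since a null value in a strategic-merge patch deletes the key.
func setTestingLabel(c kubernetes.Interface, ns, pod string, remove bool) error {
	patch := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if remove {
		patch = []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	}
	_, err := c.CoreV1().Pods(ns).Patch(pod, types.StrategicMergePatchType, patch)
	return err
}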
May 11 17:45:17.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:45:17.995: INFO: namespace kubectl-9485 deletion completed in 6.514760405s
• [SLOW TEST:16.721 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:45:17.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 11 17:45:24.671: INFO: Successfully updated pod "labelsupdate80be095f-2b3d-4268-b0ad-d208bd963b36"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:45:28.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1495" for this suite.
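Note: this is the projected-volume variant of the downward API annotation test earlier in this run: metadata.labels is exposed through a projected volume source, and "Successfully updated pod" marks the point where the kubelet has rewritten the file after the labels changed. A minimal sketch of that volume shape, with illustrative names and paths:

package example

import corev1 "k8s.io/api/core/v1"

// projectedLabelsVolume exposes the pod's labels through a projected volume,
// the mechanism under test above; projected volumes can combine downward API
// items with secrets and configMaps in a single mount.
func projectedLabelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}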
May 11 17:45:50.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:50.843: INFO: namespace projected-1495 deletion completed in 22.141778606s • [SLOW TEST:32.848 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:45:50.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 17:45:50.990: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5888,SelfLink:/api/v1/namespaces/watch-5888/configmaps/e2e-watch-test-watch-closed,UID:43cd9117-3c63-4c71-a22b-c9f6107b0e47,ResourceVersion:10291224,Generation:0,CreationTimestamp:2020-05-11 17:45:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 17:45:50.990: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5888,SelfLink:/api/v1/namespaces/watch-5888/configmaps/e2e-watch-test-watch-closed,UID:43cd9117-3c63-4c71-a22b-c9f6107b0e47,ResourceVersion:10291225,Generation:0,CreationTimestamp:2020-05-11 17:45:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 17:45:51.124: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5888,SelfLink:/api/v1/namespaces/watch-5888/configmaps/e2e-watch-test-watch-closed,UID:43cd9117-3c63-4c71-a22b-c9f6107b0e47,ResourceVersion:10291226,Generation:0,CreationTimestamp:2020-05-11 17:45:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 17:45:51.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5888,SelfLink:/api/v1/namespaces/watch-5888/configmaps/e2e-watch-test-watch-closed,UID:43cd9117-3c63-4c71-a22b-c9f6107b0e47,ResourceVersion:10291227,Generation:0,CreationTimestamp:2020-05-11 17:45:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:45:51.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5888" for this suite. May 11 17:45:57.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:57.249: INFO: namespace watch-5888 deletion completed in 6.111566446s • [SLOW TEST:6.405 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:45:57.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 17:46:04.169: INFO: Waiting up to 5m0s for pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9" in namespace "pods-2837" to be "success or failure" May 11 17:46:04.192: INFO: Pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.426047ms
May 11 17:46:06.195: INFO: Pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025749605s
May 11 17:46:08.199: INFO: Pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029305194s
May 11 17:46:10.210: INFO: Pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041095761s
STEP: Saw pod success
May 11 17:46:10.211: INFO: Pod "client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9" satisfied condition "success or failure"
May 11 17:46:10.221: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9 container env3cont:
STEP: delete the pod
May 11 17:46:10.409: INFO: Waiting for pod client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9 to disappear
May 11 17:46:10.588: INFO: Pod client-envvars-86142d4f-f6c6-4000-9ecc-7b3c1537eac9 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:46:10.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2837" for this suite.
May 11 17:46:52.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:46:52.936: INFO: namespace pods-2837 deletion completed in 42.344912781s
• [SLOW TEST:55.687 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:46:52.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:46:53.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1777" for this suite.
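Note: the Kubelet test above only needs to show that a pod whose container command always fails can still be deleted promptly. A sketch of the corresponding API call with a v1.15-era client-go (pre-context Delete signature); the zero grace period is an illustrative choice echoing the --grace-period=0 --force cleanup in the kubectl label test earlier, not necessarily what this test passes:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteNow deletes a pod with a zero grace period, so a crash-looping
// container does not delay its removal.
func deleteNow(c kubernetes.Interface, ns, name string) error {
	zero := int64(0)
	return c.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
		GracePeriodSeconds: &zero,
	})
}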
May 11 17:47:01.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:47:01.883: INFO: namespace kubelet-test-1777 deletion completed in 8.228377362s

• [SLOW TEST:8.947 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:47:01.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 11 17:47:27.065: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.065: INFO: >>> kubeConfig: /root/.kube/config
[verbose client-go SPDY stream debug (I0511 … log.go:172 "Create stream" / "Data frame" / "Stream removed" entries, differing only in pointers) omitted here and after each exec below]
May 11 17:47:27.171: INFO: Exec stderr: ""
May 11 17:47:27.171: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.171: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.275: INFO: Exec stderr: ""
May 11 17:47:27.275: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.275: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.428: INFO: Exec stderr: ""
May 11 17:47:27.428: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.428: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.516: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 11 17:47:27.516: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.516: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.637: INFO: Exec stderr: ""
May 11 17:47:27.637: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.637: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.733: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 11 17:47:27.733: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.733: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.825: INFO: Exec stderr: ""
May 11 17:47:27.825: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.825: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:27.940: INFO: Exec stderr: ""
May 11 17:47:27.940: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:27.940: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:28.341: INFO: Exec stderr: ""
May 11 17:47:28.341: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2776 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 17:47:28.341: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:47:28.434: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:47:28.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2776" for this suite.
May 11 17:48:12.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:48:12.232: INFO: namespace e2e-kubelet-etc-hosts-2776 deletion completed in 42.958051258s

• [SLOW TEST:70.349 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
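What this test pins down: with hostNetwork=false the kubelet injects a managed /etc/hosts into every container, except one that mounts its own file at /etc/hosts; with hostNetwork=true the node's file is used untouched. For readers reproducing it by hand, a minimal sketch of such a pod (pod name, image and the hostPath choice are illustrative; only the behaviour is taken from the test):

    # One kubelet-managed container, one that opts out by mounting its own
    # file at /etc/hosts (here the node's file via hostPath).
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: etc-hosts-demo            # hypothetical name
    spec:
      hostNetwork: false
      volumes:
      - name: own-hosts
        hostPath:
          path: /etc/hosts
      containers:
      - name: managed                 # kubelet rewrites /etc/hosts here
        image: busybox
        command: ["sleep", "3600"]
      - name: unmanaged               # explicit /etc/hosts mount: kubelet leaves it alone
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: own-hosts
          mountPath: /etc/hosts
    EOF
    kubectl exec etc-hosts-demo -c managed -- cat /etc/hosts    # shows the "# Kubernetes-managed hosts file." banner
    kubectl exec etc-hosts-demo -c unmanaged -- cat /etc/hosts  # shows the mounted file, no banner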
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:12.928: INFO: stderr: "" May 11 17:48:12.928: INFO: stdout: "" May 11 17:48:12.928: INFO: update-demo-nautilus-t72wm is created but not running May 11 17:48:17.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9893' May 11 17:48:18.112: INFO: stderr: "" May 11 17:48:18.112: INFO: stdout: "update-demo-nautilus-t72wm update-demo-nautilus-wk2vt " May 11 17:48:18.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t72wm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:18.307: INFO: stderr: "" May 11 17:48:18.307: INFO: stdout: "true" May 11 17:48:18.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t72wm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:18.407: INFO: stderr: "" May 11 17:48:18.407: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 17:48:18.407: INFO: validating pod update-demo-nautilus-t72wm May 11 17:48:18.411: INFO: got data: { "image": "nautilus.jpg" } May 11 17:48:18.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 17:48:18.411: INFO: update-demo-nautilus-t72wm is verified up and running May 11 17:48:18.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wk2vt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:18.512: INFO: stderr: "" May 11 17:48:18.512: INFO: stdout: "" May 11 17:48:18.512: INFO: update-demo-nautilus-wk2vt is created but not running May 11 17:48:23.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9893' May 11 17:48:23.625: INFO: stderr: "" May 11 17:48:23.625: INFO: stdout: "update-demo-nautilus-t72wm update-demo-nautilus-wk2vt " May 11 17:48:23.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t72wm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:23.710: INFO: stderr: "" May 11 17:48:23.710: INFO: stdout: "true" May 11 17:48:23.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t72wm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:23.816: INFO: stderr: "" May 11 17:48:23.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 17:48:23.816: INFO: validating pod update-demo-nautilus-t72wm May 11 17:48:23.819: INFO: got data: { "image": "nautilus.jpg" } May 11 17:48:23.819: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 17:48:23.819: INFO: update-demo-nautilus-t72wm is verified up and running May 11 17:48:23.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wk2vt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:23.911: INFO: stderr: "" May 11 17:48:23.911: INFO: stdout: "true" May 11 17:48:23.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wk2vt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9893' May 11 17:48:23.998: INFO: stderr: "" May 11 17:48:23.998: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 17:48:23.998: INFO: validating pod update-demo-nautilus-wk2vt May 11 17:48:24.002: INFO: got data: { "image": "nautilus.jpg" } May 11 17:48:24.002: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 17:48:24.002: INFO: update-demo-nautilus-wk2vt is verified up and running STEP: using delete to clean up resources May 11 17:48:24.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 11 17:48:24.268: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 17:48:24.268: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 17:48:24.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9893' May 11 17:48:24.541: INFO: stderr: "No resources found.\n" May 11 17:48:24.541: INFO: stdout: "" May 11 17:48:24.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9893 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 17:48:25.092: INFO: stderr: "" May 11 17:48:25.092: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:48:25.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9893" for this suite. 
May 11 17:48:35.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:48:35.178: INFO: namespace kubectl-9893 deletion completed in 10.083323523s • [SLOW TEST:22.946 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:48:35.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-eea94a8a-4c33-4f34-a704-f108d34c8ea1 STEP: Creating a pod to test consume configMaps May 11 17:48:35.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e" in namespace "projected-8995" to be "success or failure" May 11 17:48:35.583: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.653413ms May 11 17:48:37.587: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067484562s May 11 17:48:39.818: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298405981s May 11 17:48:41.985: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465719661s May 11 17:48:43.990: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470113516s May 11 17:48:45.993: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Succeeded", Reason="", readiness=false. 
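The replication controller piped to `kubectl create -f -` above is, in outline, the following. The image, labels and name match the log; the replica count of 2 matches the two nautilus pods observed; the rest is a sketch:

    cat <<'EOF' | kubectl create -f - --namespace=kubectl-9893
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: update-demo-nautilus
    spec:
      replicas: 2
      selector:
        name: update-demo
      template:
        metadata:
          labels:
            name: update-demo
        spec:
          containers:
          - name: update-demo
            image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
    EOF
    # The same force-deletion the suite performs at cleanup:
    kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-9893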
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:48:35.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-eea94a8a-4c33-4f34-a704-f108d34c8ea1
STEP: Creating a pod to test consume configMaps
May 11 17:48:35.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e" in namespace "projected-8995" to be "success or failure"
May 11 17:48:35.583: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.653413ms
May 11 17:48:37.587: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067484562s
May 11 17:48:39.818: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298405981s
May 11 17:48:41.985: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465719661s
May 11 17:48:43.990: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470113516s
May 11 17:48:45.993: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473661597s
STEP: Saw pod success
May 11 17:48:45.993: INFO: Pod "pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e" satisfied condition "success or failure"
May 11 17:48:45.996: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e container projected-configmap-volume-test:
STEP: delete the pod
May 11 17:48:46.124: INFO: Waiting for pod pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e to disappear
May 11 17:48:46.167: INFO: Pod pod-projected-configmaps-0f262e0c-3507-435c-81aa-34cad4c3518e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:48:46.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8995" for this suite.
May 11 17:48:52.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:48:52.390: INFO: namespace projected-8995 deletion completed in 6.219338142s

• [SLOW TEST:17.212 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
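Roughly what the pod above does: project a ConfigMap into a volume and read it back from a non-root container. A sketch; the container name comes from the log, while the ConfigMap name, key, mount path and runAsUser value are illustrative:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      securityContext:
        runAsUser: 1000                  # non-root, per the [LinuxOnly] non-root variant
      restartPolicy: Never
      volumes:
      - name: cm-vol
        projected:
          sources:
          - configMap:
              name: demo-cm
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]   # prints value-1, then the pod Succeeds
        volumeMounts:
        - name: cm-vol
          mountPath: /etc/projected
    EOF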
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:48:52.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 17:48:52.815: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c" in namespace "projected-3962" to be "success or failure"
May 11 17:48:52.977: INFO: Pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 161.848179ms
May 11 17:48:55.327: INFO: Pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511488445s
May 11 17:48:57.332: INFO: Pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c": Phase="Running", Reason="", readiness=true. Elapsed: 4.51635557s
May 11 17:48:59.393: INFO: Pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577825326s
STEP: Saw pod success
May 11 17:48:59.393: INFO: Pod "downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c" satisfied condition "success or failure"
May 11 17:48:59.396: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c container client-container:
STEP: delete the pod
May 11 17:48:59.436: INFO: Waiting for pod downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c to disappear
May 11 17:48:59.470: INFO: Pod downwardapi-volume-08478426-c5e1-4932-9fe4-d0f72ad11c5c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:48:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3962" for this suite.
May 11 17:49:07.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:49:07.555: INFO: namespace projected-3962 deletion completed in 8.081022138s

• [SLOW TEST:15.165 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
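Sketch of the downward-API arrangement exercised above: the container's own memory limit is exposed as a file through a projected downwardAPI volume. The container name is taken from the log; the pod name, limit value and paths are illustrative:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-limit-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/memory_limit"]   # prints the limit in bytes
        resources:
          limits:
            memory: 64Mi                                 # illustrative limit
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
    EOF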
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:49:07.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0511 17:49:11.770222 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 17:49:11.770: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:49:11.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4511" for this suite.
May 11 17:49:18.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:49:18.852: INFO: namespace gc-4511 deletion completed in 7.07928341s

• [SLOW TEST:11.297 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
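What "not orphaning" means in kubectl terms: a plain, cascading delete of the Deployment also removes the ReplicaSet and pods it owns, which is exactly what the test waits for above. An equivalent manual check, with illustrative names:

    kubectl create deployment gc-demo --image=nginx
    kubectl get rs -l app=gc-demo       # one ReplicaSet, created and owned by the Deployment
    kubectl delete deployment gc-demo   # default delete: dependents are garbage-collected, not orphaned
    kubectl get rs -l app=gc-demo       # eventually: No resources found.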
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:49:18.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 17:49:19.566: INFO: Waiting up to 5m0s for pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5" in namespace "emptydir-331" to be "success or failure"
May 11 17:49:19.920: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 354.072994ms
May 11 17:49:22.131: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564413353s
May 11 17:49:24.135: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568456128s
May 11 17:49:26.172: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5": Phase="Running", Reason="", readiness=true. Elapsed: 6.60543001s
May 11 17:49:28.175: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.609299716s
STEP: Saw pod success
May 11 17:49:28.176: INFO: Pod "pod-38092742-cfae-4ce0-bd17-7aa176be76a5" satisfied condition "success or failure"
May 11 17:49:28.178: INFO: Trying to get logs from node iruya-worker2 pod pod-38092742-cfae-4ce0-bd17-7aa176be76a5 container test-container:
STEP: delete the pod
May 11 17:49:28.300: INFO: Waiting for pod pod-38092742-cfae-4ce0-bd17-7aa176be76a5 to disappear
May 11 17:49:28.363: INFO: Pod pod-38092742-cfae-4ce0-bd17-7aa176be76a5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:49:28.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-331" for this suite.
May 11 17:49:36.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:49:37.058: INFO: namespace emptydir-331 deletion completed in 8.691317058s

• [SLOW TEST:18.206 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
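A sketch of an emptyDir pod like the (non-root,0666,tmpfs) case above: `medium: Memory` makes the volume tmpfs-backed, and dropping `medium` entirely gives the "default medium" variant tested further below. The suite uses its own mounttest image to create and stat a 0666 file; the busybox commands, names and uid here are illustrative:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      securityContext:
        runAsUser: 1000               # the (non-root,…) variant runs as a non-root uid
      restartPolicy: Never
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory              # tmpfs-backed emptyDir
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && stat -c %a /test-volume/f && mount | grep test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
    EOF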
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:49:37.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-19a26d3c-4a38-462f-baf2-85faee4bb016
STEP: Creating a pod to test consume secrets
May 11 17:49:37.479: INFO: Waiting up to 5m0s for pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123" in namespace "secrets-310" to be "success or failure"
May 11 17:49:37.530: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123": Phase="Pending", Reason="", readiness=false. Elapsed: 51.564825ms
May 11 17:49:39.561: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082219779s
May 11 17:49:41.563: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083950985s
May 11 17:49:43.930: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451336547s
May 11 17:49:45.935: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.455827504s
STEP: Saw pod success
May 11 17:49:45.935: INFO: Pod "pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123" satisfied condition "success or failure"
May 11 17:49:45.939: INFO: Trying to get logs from node iruya-worker pod pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123 container secret-volume-test:
STEP: delete the pod
May 11 17:49:46.193: INFO: Waiting for pod pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123 to disappear
May 11 17:49:46.417: INFO: Pod pod-secrets-cf2c7289-c878-4699-8970-c40fdf40e123 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:49:46.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-310" for this suite.
May 11 17:49:52.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:49:52.522: INFO: namespace secrets-310 deletion completed in 6.101283576s

• [SLOW TEST:15.464 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
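Sketch of the multi-volume secret pod: the same Secret mounted at two different paths in one container. The container name is from the log; the secret name, key and mount paths are illustrative:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-multivol-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: secret-volume-1
        secret:
          secretName: demo-secret
      - name: secret-volume-2
        secret:
          secretName: demo-secret
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-1
        - name: secret-volume-2
          mountPath: /etc/secret-2
    EOF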
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:49:52.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 11 17:49:59.825: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 11 17:50:14.914: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:50:14.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7683" for this suite.
May 11 17:50:20.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:50:21.002: INFO: namespace pods-7683 deletion completed in 6.082226549s

• [SLOW TEST:28.480 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
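The deletion flow this test exercises, in kubectl terms (pod name and grace period are illustrative): a graceful delete sets the pod's deletionTimestamp and grace period, the kubelet observes the termination notice and stops the containers, and the API object disappears once termination completes.

    kubectl run grace-demo --image=nginx --restart=Never
    kubectl delete pod grace-demo --grace-period=30 &
    kubectl get pod grace-demo -o jsonpath='{.metadata.deletionTimestamp}'   # set while the pod is terminating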
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:50:21.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
May 11 17:50:21.221: INFO: Waiting up to 5m0s for pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a" in namespace "emptydir-5422" to be "success or failure"
May 11 17:50:21.255: INFO: Pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.045518ms
May 11 17:50:23.259: INFO: Pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038112085s
May 11 17:50:25.263: INFO: Pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041916937s
May 11 17:50:27.266: INFO: Pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044707518s
STEP: Saw pod success
May 11 17:50:27.266: INFO: Pod "pod-6232ad71-eabb-45fe-9006-205041ef0d7a" satisfied condition "success or failure"
May 11 17:50:27.267: INFO: Trying to get logs from node iruya-worker pod pod-6232ad71-eabb-45fe-9006-205041ef0d7a container test-container:
STEP: delete the pod
May 11 17:50:27.417: INFO: Waiting for pod pod-6232ad71-eabb-45fe-9006-205041ef0d7a to disappear
May 11 17:50:27.420: INFO: Pod pod-6232ad71-eabb-45fe-9006-205041ef0d7a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:50:27.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5422" for this suite.
May 11 17:50:35.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:50:35.487: INFO: namespace emptydir-5422 deletion completed in 8.064648367s

• [SLOW TEST:14.485 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:50:35.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4611
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4611 to expose endpoints map[]
May 11 17:50:35.638: INFO: Get endpoints failed (63.786197ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 11 17:50:36.740: INFO: successfully validated that service endpoint-test2 in namespace services-4611 exposes endpoints map[] (1.165735491s elapsed)
STEP: Creating pod pod1 in namespace services-4611
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4611 to expose endpoints map[pod1:[80]]
May 11 17:50:41.037: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.292903871s elapsed, will retry)
May 11 17:50:42.042: INFO: successfully validated that service endpoint-test2 in namespace services-4611 exposes endpoints map[pod1:[80]] (5.297702711s elapsed)
STEP: Creating pod pod2 in namespace services-4611
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4611 to expose endpoints map[pod1:[80] pod2:[80]]
May 11 17:50:47.065: INFO: Unexpected endpoints: found map[5217afbd-311c-4fea-8709-d2eebbe9f17c:[80]], expected map[pod1:[80] pod2:[80]] (5.019489599s elapsed, will retry)
May 11 17:50:49.917: INFO: successfully validated that service endpoint-test2 in namespace services-4611 exposes endpoints map[pod1:[80] pod2:[80]] (7.871537126s elapsed)
STEP: Deleting pod pod1 in namespace services-4611
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4611 to expose endpoints map[pod2:[80]]
May 11 17:50:50.969: INFO: successfully validated that service endpoint-test2 in namespace services-4611 exposes endpoints map[pod2:[80]] (1.048873509s elapsed)
STEP: Deleting pod pod2 in namespace services-4611
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4611 to expose endpoints map[]
May 11 17:50:52.015: INFO: successfully validated that service endpoint-test2 in namespace services-4611 exposes endpoints map[] (1.042475823s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:50:52.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4611" for this suite.
May 11 17:51:23.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:51:23.608: INFO: namespace services-4611 deletion completed in 30.748209883s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:48.120 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
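The endpoint bookkeeping checked above, reproduced by hand: a Service selects pods by label, and its Endpoints object gains and loses ready pod IPs as pods are created and deleted. Names and label are illustrative; port 80 matches the log (the multiport variant later in this run does the same with ports 100 and 101):

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-demo
    spec:
      selector:
        name: endpoint-demo
      ports:
      - port: 80
        targetPort: 80
    EOF
    kubectl run pod1 --image=nginx --restart=Never --labels=name=endpoint-demo
    kubectl get endpoints endpoint-demo   # pod1's IP:80 appears once pod1 is ready
    kubectl delete pod pod1
    kubectl get endpoints endpoint-demo   # and disappears again after deletion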
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 17:51:23.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-bf76452a-6227-4a94-9773-7fe052020638
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bf76452a-6227-4a94-9773-7fe052020638
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 17:52:57.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3170" for this suite.
May 11 17:53:22.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:53:22.267: INFO: namespace configmap-3170 deletion completed in 24.413374203s

• [SLOW TEST:118.659 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
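The update path verified above, by hand: change the ConfigMap and the projected file inside an already-running pod is rewritten by the kubelet on a later sync, which is why the test simply waits "to observe update in volume" rather than restarting anything. Names are illustrative:

    kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
    # ... start a pod (here called cm-upd-pod) mounting cm-upd-demo as a
    #     configMap volume at /etc/config ...
    kubectl patch configmap cm-upd-demo -p '{"data":{"data-1":"value-2"}}'
    kubectl exec cm-upd-pod -- cat /etc/config/data-1   # eventually prints value-2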
May 11 17:53:39.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:53:39.216: INFO: namespace containers-3018 deletion completed in 8.115696973s • [SLOW TEST:16.948 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:53:39.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-311 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-311 to expose endpoints map[] May 11 17:53:40.022: INFO: Get endpoints failed (89.76794ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 17:53:41.127: INFO: successfully validated that service multi-endpoint-test in namespace services-311 exposes endpoints map[] (1.194394541s elapsed) STEP: Creating pod pod1 in namespace services-311 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-311 to expose endpoints map[pod1:[100]] May 11 17:53:46.207: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.074317913s elapsed, will retry) May 11 17:53:48.273: INFO: successfully validated that service multi-endpoint-test in namespace services-311 exposes endpoints map[pod1:[100]] (7.139645855s elapsed) STEP: Creating pod pod2 in namespace services-311 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-311 to expose endpoints map[pod1:[100] pod2:[101]] May 11 17:53:54.833: INFO: Unexpected endpoints: found map[2a7c51d0-b47c-4126-be2b-334007fab546:[100]], expected map[pod1:[100] pod2:[101]] (6.55702763s elapsed, will retry) May 11 17:53:55.848: INFO: successfully validated that service multi-endpoint-test in namespace services-311 exposes endpoints map[pod1:[100] pod2:[101]] (7.572033234s elapsed) STEP: Deleting pod pod1 in namespace services-311 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-311 to expose endpoints map[pod2:[101]] May 11 17:53:57.577: INFO: successfully validated that service multi-endpoint-test in namespace services-311 exposes endpoints map[pod2:[101]] (1.724502152s elapsed) STEP: Deleting pod pod2 in namespace services-311 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-311 to expose endpoints map[] May 11 17:53:57.810: INFO: successfully validated that service multi-endpoint-test in namespace services-311 exposes 
endpoints map[] (229.650076ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:53:58.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-311" for this suite. May 11 17:54:24.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:54:24.923: INFO: namespace services-311 deletion completed in 26.49592489s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:45.707 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:54:24.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:54:33.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4061" for this suite. 
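------------------------------
Aside: the read-only-rootfs spec that just finished corresponds to the container-level securityContext flag readOnlyRootFilesystem. A sketch of what it enforces (pod name and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Any write to the root filesystem should fail with EROFS; explicitly
    # mounted volumes (emptyDir etc.) would remain writable.
    command: ["/bin/sh", "-c", "echo x > /should-fail"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# Expect the container to exit non-zero with "Read-only file system".
------------------------------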
May 11 17:55:13.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:55:13.323: INFO: namespace kubelet-test-4061 deletion completed in 40.133429643s • [SLOW TEST:48.399 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:55:13.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 17:55:15.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159" in namespace "downward-api-1303" to be "success or failure" May 11 17:55:15.387: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Pending", Reason="", readiness=false. Elapsed: 64.116622ms May 11 17:55:17.487: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163753461s May 11 17:55:19.882: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559315686s May 11 17:55:21.885: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562411207s May 11 17:55:23.890: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Running", Reason="", readiness=true. Elapsed: 8.566594624s May 11 17:55:25.893: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.569963246s STEP: Saw pod success May 11 17:55:25.893: INFO: Pod "downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159" satisfied condition "success or failure" May 11 17:55:25.895: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159 container client-container: STEP: delete the pod May 11 17:55:25.985: INFO: Waiting for pod downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159 to disappear May 11 17:55:26.091: INFO: Pod downwardapi-volume-740a309d-ef1f-4370-8c1c-b25c32705159 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:55:26.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1303" for this suite. May 11 17:55:34.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:55:34.178: INFO: namespace downward-api-1303 deletion completed in 8.082776414s • [SLOW TEST:20.855 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:55:34.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-f7v9 STEP: Creating a pod to test atomic-volume-subpath May 11 17:55:34.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-f7v9" in namespace "subpath-6831" to be "success or failure" May 11 17:55:34.338: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Pending", Reason="", readiness=false. Elapsed: 49.8205ms May 11 17:55:36.342: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053801588s May 11 17:55:38.345: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056513517s May 11 17:55:40.349: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 6.060482076s May 11 17:55:42.354: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 8.065085253s May 11 17:55:44.554: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 10.265210178s May 11 17:55:46.557: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.268410002s May 11 17:55:48.643: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 14.355005651s May 11 17:55:50.647: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 16.358837656s May 11 17:55:52.652: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 18.363369287s May 11 17:55:55.499: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 21.21095042s May 11 17:55:57.734: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 23.445379657s May 11 17:55:59.764: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Running", Reason="", readiness=true. Elapsed: 25.475480543s May 11 17:56:01.768: INFO: Pod "pod-subpath-test-downwardapi-f7v9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.479281669s STEP: Saw pod success May 11 17:56:01.768: INFO: Pod "pod-subpath-test-downwardapi-f7v9" satisfied condition "success or failure" May 11 17:56:01.770: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-f7v9 container test-container-subpath-downwardapi-f7v9: STEP: delete the pod May 11 17:56:03.047: INFO: Waiting for pod pod-subpath-test-downwardapi-f7v9 to disappear May 11 17:56:03.471: INFO: Pod pod-subpath-test-downwardapi-f7v9 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-f7v9 May 11 17:56:03.471: INFO: Deleting pod "pod-subpath-test-downwardapi-f7v9" in namespace "subpath-6831" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:56:03.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6831" for this suite. 
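------------------------------
Aside: the two Downward-API specs above (podname exposed via a downwardAPI volume, then the same kind of volume consumed through a subPath) reduce to the sketch below; names and image are assumptions. Note that subPath mounts are materialized once at container start, so unlike a plain downwardAPI mount they do not pick up later updates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-subpath
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podname-file"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podname-file
      subPath: podname          # mount a single item out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-subpath   # should print the pod's own name
------------------------------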
May 11 17:56:12.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:56:12.354: INFO: namespace subpath-6831 deletion completed in 8.458995608s • [SLOW TEST:38.176 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:56:12.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7e907873-eadd-4711-8788-b893099607dd STEP: Creating a pod to test consume configMaps May 11 17:56:12.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0" in namespace "projected-1725" to be "success or failure" May 11 17:56:12.594: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 67.40675ms May 11 17:56:14.607: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080880262s May 11 17:56:16.611: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084704895s May 11 17:56:18.741: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21454007s May 11 17:56:20.745: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218176077s May 11 17:56:22.748: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221871766s May 11 17:56:24.883: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.356641644s May 11 17:56:27.532: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.005311572s STEP: Saw pod success May 11 17:56:27.532: INFO: Pod "pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0" satisfied condition "success or failure" May 11 17:56:27.536: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0 container projected-configmap-volume-test: STEP: delete the pod May 11 17:56:29.580: INFO: Waiting for pod pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0 to disappear May 11 17:56:30.257: INFO: Pod pod-projected-configmaps-12491463-9edb-45d9-884b-baf352a944a0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:56:30.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1725" for this suite. May 11 17:56:45.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:56:46.646: INFO: namespace projected-1725 deletion completed in 15.970136224s • [SLOW TEST:34.291 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:56:46.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 11 17:56:48.373: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:48.848: INFO: Number of nodes with available pods: 0 May 11 17:56:48.848: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:50.221: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:50.669: INFO: Number of nodes with available pods: 0 May 11 17:56:50.669: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:51.351: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:51.597: INFO: Number of nodes with available pods: 0 May 11 17:56:51.597: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:52.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:52.185: INFO: Number of nodes with available pods: 0 May 11 17:56:52.185: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:53.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:53.757: INFO: Number of nodes with available pods: 0 May 11 17:56:53.757: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:54.179: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:54.262: INFO: Number of nodes with available pods: 0 May 11 17:56:54.262: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:55.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:55.068: INFO: Number of nodes with available pods: 0 May 11 17:56:55.068: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:55.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:55.995: INFO: Number of nodes with available pods: 0 May 11 17:56:55.995: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:57.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:57.211: INFO: Number of nodes with available pods: 0 May 11 17:56:57.211: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:58.329: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:58.412: INFO: Number of nodes with available pods: 0 May 11 17:56:58.412: INFO: Node iruya-worker is running more than one daemon pod May 11 17:56:59.532: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:56:59.892: INFO: Number of nodes with available pods: 1 May 11 17:56:59.892: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:00.921: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:00.923: INFO: Number of nodes with available pods: 2 May 11 17:57:00.924: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 11 17:57:01.478: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:01.860: INFO: Number of nodes with available pods: 1 May 11 17:57:01.860: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:02.865: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:02.869: INFO: Number of nodes with available pods: 1 May 11 17:57:02.869: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:04.016: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:04.019: INFO: Number of nodes with available pods: 1 May 11 17:57:04.019: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:04.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:04.867: INFO: Number of nodes with available pods: 1 May 11 17:57:04.867: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:05.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:06.505: INFO: Number of nodes with available pods: 1 May 11 17:57:06.505: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:07.382: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:07.442: INFO: Number of nodes with available pods: 1 May 11 17:57:07.442: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:08.693: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:09.010: INFO: Number of nodes with available pods: 1 May 11 17:57:09.010: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:10.371: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:10.681: INFO: Number of nodes with available pods: 1 May 11 17:57:10.681: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:11.263: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:12.114: 
INFO: Number of nodes with available pods: 1 May 11 17:57:12.114: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:13.401: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:13.455: INFO: Number of nodes with available pods: 1 May 11 17:57:13.455: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:13.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:13.867: INFO: Number of nodes with available pods: 1 May 11 17:57:13.867: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:14.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:14.965: INFO: Number of nodes with available pods: 1 May 11 17:57:14.965: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:15.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:15.895: INFO: Number of nodes with available pods: 1 May 11 17:57:15.895: INFO: Node iruya-worker is running more than one daemon pod May 11 17:57:17.072: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:57:17.076: INFO: Number of nodes with available pods: 2 May 11 17:57:17.076: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2838, will wait for the garbage collector to delete the pods May 11 17:57:17.137: INFO: Deleting DaemonSet.extensions daemon-set took: 6.821979ms May 11 17:57:17.438: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.268136ms May 11 17:57:22.940: INFO: Number of nodes with available pods: 0 May 11 17:57:22.940: INFO: Number of running nodes: 0, number of available pods: 0 May 11 17:57:22.942: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2838/daemonsets","resourceVersion":"10293188"},"items":null} May 11 17:57:22.944: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2838/pods","resourceVersion":"10293188"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:57:22.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2838" for this suite. 
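------------------------------
Aside: the many "DaemonSet pods can't tolerate node iruya-control-plane" lines above are expected, not an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint, so the checker skips it and only counts the two workers. A minimal DaemonSet along the lines of what the spec drives, with the toleration that would extend it to the master left commented out (names and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon
spec:
  selector:
    matchLabels:
      app: demo-daemon
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      containers:
      - name: app
        image: busybox
        command: ["/bin/sh", "-c", "sleep 3600"]
      # Uncomment to schedule onto the tainted control-plane node as well:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
EOF
kubectl rollout status daemonset/demo-daemon
# Deleting one daemon pod triggers the "revived" check above; deleting the
# DaemonSet garbage-collects its pods, as in the teardown log.
kubectl delete daemonset demo-daemon
------------------------------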
May 11 17:57:33.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:57:33.486: INFO: namespace daemonsets-2838 deletion completed in 10.532117625s • [SLOW TEST:46.840 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:57:33.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 11 17:57:34.402: INFO: Waiting up to 5m0s for pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8" in namespace "var-expansion-6780" to be "success or failure" May 11 17:57:35.017: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 614.941346ms May 11 17:57:37.022: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620671303s May 11 17:57:39.027: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625510138s May 11 17:57:41.202: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799960453s May 11 17:57:43.215: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.813669048s STEP: Saw pod success May 11 17:57:43.215: INFO: Pod "var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8" satisfied condition "success or failure" May 11 17:57:43.218: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8 container dapi-container: STEP: delete the pod May 11 17:57:43.359: INFO: Waiting for pod var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8 to disappear May 11 17:57:43.579: INFO: Pod var-expansion-a0499687-623e-48b5-9828-a51e25a86ac8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:57:43.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6780" for this suite. 
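------------------------------
Aside: "substituting values in a container's command" refers to $(VAR) expansion, which the kubelet performs against the container's environment before exec, with no shell involved. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by the kubelet; writing $$(MESSAGE) instead
    # would escape it and leave the literal text in place.
    command: ["/bin/echo", "$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # hello from the environment
------------------------------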
May 11 17:57:49.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:57:50.075: INFO: namespace var-expansion-6780 deletion completed in 6.491779672s • [SLOW TEST:16.589 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:57:50.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-844 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 17:57:50.215: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 17:58:17.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostName&protocol=http&host=10.244.2.109&port=8080&tries=1'] Namespace:pod-network-test-844 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 17:58:17.120: INFO: >>> kubeConfig: /root/.kube/config I0511 17:58:17.154495 7 log.go:172] (0xc00113a6e0) (0xc001b1fae0) Create stream I0511 17:58:17.154529 7 log.go:172] (0xc00113a6e0) (0xc001b1fae0) Stream added, broadcasting: 1 I0511 17:58:17.156542 7 log.go:172] (0xc00113a6e0) Reply frame received for 1 I0511 17:58:17.156593 7 log.go:172] (0xc00113a6e0) (0xc002021f40) Create stream I0511 17:58:17.156604 7 log.go:172] (0xc00113a6e0) (0xc002021f40) Stream added, broadcasting: 3 I0511 17:58:17.157722 7 log.go:172] (0xc00113a6e0) Reply frame received for 3 I0511 17:58:17.157773 7 log.go:172] (0xc00113a6e0) (0xc001b1fb80) Create stream I0511 17:58:17.157795 7 log.go:172] (0xc00113a6e0) (0xc001b1fb80) Stream added, broadcasting: 5 I0511 17:58:17.158748 7 log.go:172] (0xc00113a6e0) Reply frame received for 5 I0511 17:58:17.241797 7 log.go:172] (0xc00113a6e0) Data frame received for 3 I0511 17:58:17.241823 7 log.go:172] (0xc002021f40) (3) Data frame handling I0511 17:58:17.241838 7 log.go:172] (0xc002021f40) (3) Data frame sent I0511 17:58:17.242344 7 log.go:172] (0xc00113a6e0) Data frame received for 5 I0511 17:58:17.242365 7 log.go:172] (0xc001b1fb80) (5) Data frame handling I0511 17:58:17.242617 7 log.go:172] (0xc00113a6e0) Data frame received for 3 I0511 17:58:17.242640 7 log.go:172] (0xc002021f40) (3) Data frame handling I0511 17:58:17.244287 7 log.go:172] (0xc00113a6e0) Data frame received for 1 I0511 17:58:17.244306 7 log.go:172] (0xc001b1fae0) (1) Data frame handling I0511 17:58:17.244322 7 
log.go:172] (0xc001b1fae0) (1) Data frame sent I0511 17:58:17.244336 7 log.go:172] (0xc00113a6e0) (0xc001b1fae0) Stream removed, broadcasting: 1 I0511 17:58:17.244364 7 log.go:172] (0xc00113a6e0) Go away received I0511 17:58:17.244453 7 log.go:172] (0xc00113a6e0) (0xc001b1fae0) Stream removed, broadcasting: 1 I0511 17:58:17.244479 7 log.go:172] (0xc00113a6e0) (0xc002021f40) Stream removed, broadcasting: 3 I0511 17:58:17.244494 7 log.go:172] (0xc00113a6e0) (0xc001b1fb80) Stream removed, broadcasting: 5 May 11 17:58:17.244: INFO: Waiting for endpoints: map[] May 11 17:58:17.247: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostName&protocol=http&host=10.244.1.170&port=8080&tries=1'] Namespace:pod-network-test-844 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 17:58:17.247: INFO: >>> kubeConfig: /root/.kube/config I0511 17:58:17.277869 7 log.go:172] (0xc001176c60) (0xc0023c23c0) Create stream I0511 17:58:17.277896 7 log.go:172] (0xc001176c60) (0xc0023c23c0) Stream added, broadcasting: 1 I0511 17:58:17.279363 7 log.go:172] (0xc001176c60) Reply frame received for 1 I0511 17:58:17.279418 7 log.go:172] (0xc001176c60) (0xc0023c2460) Create stream I0511 17:58:17.279426 7 log.go:172] (0xc001176c60) (0xc0023c2460) Stream added, broadcasting: 3 I0511 17:58:17.280274 7 log.go:172] (0xc001176c60) Reply frame received for 3 I0511 17:58:17.280310 7 log.go:172] (0xc001176c60) (0xc002d5d5e0) Create stream I0511 17:58:17.280326 7 log.go:172] (0xc001176c60) (0xc002d5d5e0) Stream added, broadcasting: 5 I0511 17:58:17.281102 7 log.go:172] (0xc001176c60) Reply frame received for 5 I0511 17:58:17.341390 7 log.go:172] (0xc001176c60) Data frame received for 3 I0511 17:58:17.341414 7 log.go:172] (0xc0023c2460) (3) Data frame handling I0511 17:58:17.341432 7 log.go:172] (0xc0023c2460) (3) Data frame sent I0511 17:58:17.342199 7 log.go:172] (0xc001176c60) Data frame received for 5 I0511 17:58:17.342217 7 log.go:172] (0xc002d5d5e0) (5) Data frame handling I0511 17:58:17.342509 7 log.go:172] (0xc001176c60) Data frame received for 3 I0511 17:58:17.342532 7 log.go:172] (0xc0023c2460) (3) Data frame handling I0511 17:58:17.344243 7 log.go:172] (0xc001176c60) Data frame received for 1 I0511 17:58:17.344263 7 log.go:172] (0xc0023c23c0) (1) Data frame handling I0511 17:58:17.344278 7 log.go:172] (0xc0023c23c0) (1) Data frame sent I0511 17:58:17.344301 7 log.go:172] (0xc001176c60) (0xc0023c23c0) Stream removed, broadcasting: 1 I0511 17:58:17.344347 7 log.go:172] (0xc001176c60) Go away received I0511 17:58:17.344468 7 log.go:172] (0xc001176c60) (0xc0023c23c0) Stream removed, broadcasting: 1 I0511 17:58:17.344488 7 log.go:172] (0xc001176c60) (0xc0023c2460) Stream removed, broadcasting: 3 I0511 17:58:17.344498 7 log.go:172] (0xc001176c60) (0xc002d5d5e0) Stream removed, broadcasting: 5 May 11 17:58:17.344: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:58:17.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-844" for this suite. 
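------------------------------
Aside: the I0511 log.go stream chatter above is just the exec transport; the actual connectivity check is the curl shown in the ExecWithOptions lines. Run by hand it is a single command (the pod name, container name, namespace, and IPs below are the ones from this run, whose namespace is already gone; substitute your own on a live cluster):

kubectl exec host-test-container-pod -c hostexec \
  --namespace=pod-network-test-844 -- \
  curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostName&protocol=http&host=10.244.2.109&port=8080&tries=1'
# The netexec server on the first pod IP dials the second pod IP over HTTP and
# returns a JSON body naming the responder; the test drains its expected set
# down to "Waiting for endpoints: map[]" as each pod answers.
------------------------------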
May 11 17:58:43.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:58:43.562: INFO: namespace pod-network-test-844 deletion completed in 26.214386512s • [SLOW TEST:53.487 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:58:43.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 17:58:44.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3289' May 11 17:58:52.509: INFO: stderr: "" May 11 17:58:52.509: INFO: stdout: "replicationcontroller/redis-master created\n" May 11 17:58:52.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3289' May 11 17:58:52.874: INFO: stderr: "" May 11 17:58:52.874: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 11 17:58:53.878: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:53.878: INFO: Found 0 / 1 May 11 17:58:54.999: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:54.999: INFO: Found 0 / 1 May 11 17:58:55.879: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:55.879: INFO: Found 0 / 1 May 11 17:58:56.934: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:56.934: INFO: Found 0 / 1 May 11 17:58:57.879: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:57.879: INFO: Found 0 / 1 May 11 17:58:59.018: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:59.018: INFO: Found 1 / 1 May 11 17:58:59.018: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 17:58:59.054: INFO: Selector matched 1 pods for map[app:redis] May 11 17:58:59.054: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 11 17:58:59.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vcm8p --namespace=kubectl-3289' May 11 17:58:59.328: INFO: stderr: "" May 11 17:58:59.328: INFO: stdout: "Name: redis-master-vcm8p\nNamespace: kubectl-3289\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Mon, 11 May 2020 17:58:52 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.111\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://bebf213e462591478062c179ef1c7a26b7d077ebec492f069db2efc35f4bf338\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 17:58:57 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tntqg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tntqg:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tntqg\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned kubectl-3289/redis-master-vcm8p to iruya-worker\n Normal Pulled 5s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-worker Created container redis-master\n Normal Started 2s kubelet, iruya-worker Started container redis-master\n" May 11 17:58:59.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3289' May 11 17:58:59.504: INFO: stderr: "" May 11 17:58:59.505: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3289\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-vcm8p\n" May 11 17:58:59.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3289' May 11 17:58:59.612: INFO: stderr: "" May 11 17:58:59.612: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3289\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.93.185\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.111:6379\nSession Affinity: None\nEvents: \n" May 11 17:58:59.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 11 17:58:59.737: INFO: stderr: "" May 11 17:58:59.737: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 17:58:36 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 17:58:36 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 17:58:36 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 17:58:36 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 56d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 11 17:58:59.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3289' May 11 17:58:59.840: INFO: stderr: "" May 11 17:58:59.840: INFO: stdout: "Name: kubectl-3289\nLabels: e2e-framework=kubectl\n e2e-run=989177ec-acd0-4485-b124-7c64419c8a75\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:58:59.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3289" for this suite. 
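------------------------------
Aside: the two `kubectl ... create -f -` calls at the top of this spec feed a ReplicationController and a Service in on stdin; the suite then describes the pod, rc, service, node, and namespace and checks the output. An approximate version of the fed manifests (a reconstruction, not the suite's fixture; the real Service uses a named targetPort, redis-server, where this sketch uses the port number):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: 6379
EOF
kubectl describe rc redis-master
kubectl describe service redis-master
------------------------------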
May 11 17:59:24.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:59:24.653: INFO: namespace kubectl-3289 deletion completed in 24.809816617s • [SLOW TEST:41.091 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:59:24.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-864386a3-0844-4b5f-8e0e-aaa139e3942a STEP: Creating a pod to test consume configMaps May 11 17:59:25.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657" in namespace "configmap-5747" to be "success or failure" May 11 17:59:25.265: INFO: Pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657": Phase="Pending", Reason="", readiness=false. Elapsed: 62.646486ms May 11 17:59:27.269: INFO: Pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067390975s May 11 17:59:29.473: INFO: Pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270508533s May 11 17:59:31.476: INFO: Pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273669136s STEP: Saw pod success May 11 17:59:31.476: INFO: Pod "pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657" satisfied condition "success or failure" May 11 17:59:31.478: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657 container configmap-volume-test: STEP: delete the pod May 11 17:59:31.677: INFO: Waiting for pod pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657 to disappear May 11 17:59:31.730: INFO: Pod pod-configmaps-4d5e33b6-5837-4f04-b2b8-422871394657 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:59:31.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5747" for this suite. 
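------------------------------
Aside: "consumable in multiple volumes in the same pod" means one ConfigMap backing two distinct volumes, each mounted at its own path. Sketch (names, image, and key are assumptions):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:                  # two volumes, the same ConfigMap behind both
  - name: cm-a
    configMap:
      name: demo-cm
  - name: cm-b
    configMap:
      name: demo-cm
EOF
------------------------------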
May 11 17:59:39.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:59:39.975: INFO: namespace configmap-5747 deletion completed in 8.24030677s • [SLOW TEST:15.321 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:59:39.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 17:59:40.206: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:59:41.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9199" for this suite. 
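------------------------------
Aside: on this v1.15 cluster, CRDs live under apiextensions.k8s.io/v1beta1 (the structural v1 API only arrived in 1.16, where spec.version becomes spec.versions). The create/delete round-trip the spec performs through client-go looks like this from the CLI; the group and kind here are made-up examples:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com    # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
EOF
kubectl get crd widgets.example.com
kubectl delete crd widgets.example.com
------------------------------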
May 11 17:59:47.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:59:47.560: INFO: namespace custom-resource-definition-9199 deletion completed in 6.075685006s • [SLOW TEST:7.585 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 17:59:47.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 17:59:48.657: INFO: Waiting up to 5m0s for pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d" in namespace "emptydir-2415" to be "success or failure" May 11 17:59:49.066: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d": Phase="Pending", Reason="", readiness=false. Elapsed: 409.304595ms May 11 17:59:51.317: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660280994s May 11 17:59:53.321: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663634247s May 11 17:59:55.390: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d": Phase="Running", Reason="", readiness=true. Elapsed: 6.733400497s May 11 17:59:57.427: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.770172227s STEP: Saw pod success May 11 17:59:57.427: INFO: Pod "pod-43e3daf4-49b3-484c-9602-52d9b821481d" satisfied condition "success or failure" May 11 17:59:57.565: INFO: Trying to get logs from node iruya-worker pod pod-43e3daf4-49b3-484c-9602-52d9b821481d container test-container: STEP: delete the pod May 11 17:59:57.627: INFO: Waiting for pod pod-43e3daf4-49b3-484c-9602-52d9b821481d to disappear May 11 17:59:57.844: INFO: Pod pod-43e3daf4-49b3-484c-9602-52d9b821481d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 17:59:57.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2415" for this suite. 
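------------------------------
Aside: the "(non-root,0644,default)" triple in this spec's name encodes: run as a non-root UID, create the test file with mode 0644, on an emptyDir of the default medium (node disk, as opposed to medium: Memory, i.e. tmpfs). Sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0644
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && ls -l /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}               # the "default" medium
EOF
kubectl logs emptydir-nonroot-0644   # expect -rw-r--r-- on /ed/f
------------------------------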
May 11 18:00:04.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:04.257: INFO: namespace emptydir-2415 deletion completed in 6.407952389s • [SLOW TEST:16.696 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:00:04.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:00:04.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3" in namespace "projected-2341" to be "success or failure" May 11 18:00:04.451: INFO: Pod "downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.305623ms May 11 18:00:06.455: INFO: Pod "downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033020745s May 11 18:00:08.458: INFO: Pod "downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035477414s STEP: Saw pod success May 11 18:00:08.458: INFO: Pod "downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3" satisfied condition "success or failure" May 11 18:00:08.460: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3 container client-container: STEP: delete the pod May 11 18:00:08.505: INFO: Waiting for pod downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3 to disappear May 11 18:00:08.545: INFO: Pod downwardapi-volume-97ad2103-3ad7-4ad2-89fc-5d59354fa4c3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:00:08.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2341" for this suite. 
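The projected downwardAPI spec above mounts pod metadata as a volume item with an explicit per-item mode and verifies that mode from inside the container. A minimal sketch of such a pod; the 0400 mode, the names, and the busybox image are illustrative (the conformance test uses its own mounttest image):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo     # hypothetical name
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # stat -L follows the ..data symlink the volume plugin creates for each item
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
            mode: 0400            # the per-item mode under test
EOF
kubectl logs downwardapi-mode-demo   # expect: 400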
May 11 18:00:14.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:14.617: INFO: namespace projected-2341 deletion completed in 6.068682932s • [SLOW TEST:10.360 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:00:14.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:00:46.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3010" for this suite. May 11 18:00:52.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:52.937: INFO: namespace namespaces-3010 deletion completed in 6.643054482s STEP: Destroying namespace "nsdeletetest-3803" for this suite. May 11 18:00:52.939: INFO: Namespace nsdeletetest-3803 was already deleted STEP: Destroying namespace "nsdeletetest-108" for this suite. 
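The Namespaces spec above is the conformance guarantee that deleting a namespace tears down every pod inside it before the name becomes usable again. The same behavior can be observed by hand with kubectl; the namespace and pod names here are hypothetical:

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox:1.29 --restart=Never \
  --namespace=nsdelete-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/sleeper -n nsdelete-demo --timeout=120s
# kubectl delete blocks by default until the namespace's finalizers clear,
# which happens only after every pod in it has been removed.
kubectl delete namespace nsdelete-demo
kubectl create namespace nsdelete-demo   # recreate, as the spec does
kubectl get pods -n nsdelete-demo        # expect: No resources found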
May 11 18:00:59.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:59.197: INFO: namespace nsdeletetest-108 deletion completed in 6.257986374s • [SLOW TEST:44.579 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:00:59.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3213 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 11 18:01:00.324: INFO: Found 0 stateful pods, waiting for 3 May 11 18:01:10.385: INFO: Found 2 stateful pods, waiting for 3 May 11 18:01:20.328: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:01:20.328: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:01:20.328: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 18:01:20.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3213 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:01:20.642: INFO: stderr: "I0511 18:01:20.459158 1208 log.go:172] (0xc000a1c420) (0xc000a0e780) Create stream\nI0511 18:01:20.459197 1208 log.go:172] (0xc000a1c420) (0xc000a0e780) Stream added, broadcasting: 1\nI0511 18:01:20.460908 1208 log.go:172] (0xc000a1c420) Reply frame received for 1\nI0511 18:01:20.460931 1208 log.go:172] (0xc000a1c420) (0xc000011b80) Create stream\nI0511 18:01:20.460942 1208 log.go:172] (0xc000a1c420) (0xc000011b80) Stream added, broadcasting: 3\nI0511 18:01:20.461832 1208 log.go:172] (0xc000a1c420) Reply frame received for 3\nI0511 18:01:20.461863 1208 log.go:172] (0xc000a1c420) (0xc000936000) Create stream\nI0511 18:01:20.461909 1208 log.go:172] (0xc000a1c420) (0xc000936000) Stream added, broadcasting: 5\nI0511 18:01:20.462845 1208 log.go:172] (0xc000a1c420) Reply frame received for 5\nI0511 18:01:20.530819 1208 log.go:172] (0xc000a1c420) Data frame received for 5\nI0511 18:01:20.530844 1208 log.go:172] (0xc000936000) (5) Data frame handling\nI0511 18:01:20.530861 1208 log.go:172] (0xc000936000) (5) Data 
frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 18:01:20.635692 1208 log.go:172] (0xc000a1c420) Data frame received for 5\nI0511 18:01:20.635736 1208 log.go:172] (0xc000a1c420) Data frame received for 3\nI0511 18:01:20.635773 1208 log.go:172] (0xc000011b80) (3) Data frame handling\nI0511 18:01:20.635789 1208 log.go:172] (0xc000011b80) (3) Data frame sent\nI0511 18:01:20.635801 1208 log.go:172] (0xc000a1c420) Data frame received for 3\nI0511 18:01:20.635811 1208 log.go:172] (0xc000011b80) (3) Data frame handling\nI0511 18:01:20.635861 1208 log.go:172] (0xc000936000) (5) Data frame handling\nI0511 18:01:20.637790 1208 log.go:172] (0xc000a1c420) Data frame received for 1\nI0511 18:01:20.637804 1208 log.go:172] (0xc000a0e780) (1) Data frame handling\nI0511 18:01:20.637816 1208 log.go:172] (0xc000a0e780) (1) Data frame sent\nI0511 18:01:20.637823 1208 log.go:172] (0xc000a1c420) (0xc000a0e780) Stream removed, broadcasting: 1\nI0511 18:01:20.637935 1208 log.go:172] (0xc000a1c420) Go away received\nI0511 18:01:20.638026 1208 log.go:172] (0xc000a1c420) (0xc000a0e780) Stream removed, broadcasting: 1\nI0511 18:01:20.638046 1208 log.go:172] (0xc000a1c420) (0xc000011b80) Stream removed, broadcasting: 3\nI0511 18:01:20.638056 1208 log.go:172] (0xc000a1c420) (0xc000936000) Stream removed, broadcasting: 5\n" May 11 18:01:20.643: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:01:20.643: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 18:01:30.672: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 18:01:40.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3213 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:01:41.027: INFO: stderr: "I0511 18:01:40.938316 1228 log.go:172] (0xc0009c6630) (0xc000610820) Create stream\nI0511 18:01:40.938374 1228 log.go:172] (0xc0009c6630) (0xc000610820) Stream added, broadcasting: 1\nI0511 18:01:40.941827 1228 log.go:172] (0xc0009c6630) Reply frame received for 1\nI0511 18:01:40.941872 1228 log.go:172] (0xc0009c6630) (0xc000610000) Create stream\nI0511 18:01:40.941896 1228 log.go:172] (0xc0009c6630) (0xc000610000) Stream added, broadcasting: 3\nI0511 18:01:40.942747 1228 log.go:172] (0xc0009c6630) Reply frame received for 3\nI0511 18:01:40.942766 1228 log.go:172] (0xc0009c6630) (0xc0006100a0) Create stream\nI0511 18:01:40.942771 1228 log.go:172] (0xc0009c6630) (0xc0006100a0) Stream added, broadcasting: 5\nI0511 18:01:40.943634 1228 log.go:172] (0xc0009c6630) Reply frame received for 5\nI0511 18:01:41.020653 1228 log.go:172] (0xc0009c6630) Data frame received for 3\nI0511 18:01:41.020688 1228 log.go:172] (0xc000610000) (3) Data frame handling\nI0511 18:01:41.020713 1228 log.go:172] (0xc000610000) (3) Data frame sent\nI0511 18:01:41.020726 1228 log.go:172] (0xc0009c6630) Data frame received for 3\nI0511 18:01:41.020738 1228 log.go:172] (0xc000610000) (3) Data frame handling\nI0511 18:01:41.020783 1228 log.go:172] (0xc0009c6630) Data frame received for 5\nI0511 18:01:41.020798 1228 log.go:172] (0xc0006100a0) (5) Data frame handling\nI0511 18:01:41.020812 1228 log.go:172] (0xc0006100a0) (5) Data frame sent\nI0511 18:01:41.020827 1228 log.go:172] 
(0xc0009c6630) Data frame received for 5\nI0511 18:01:41.020838 1228 log.go:172] (0xc0006100a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 18:01:41.022292 1228 log.go:172] (0xc0009c6630) Data frame received for 1\nI0511 18:01:41.022319 1228 log.go:172] (0xc000610820) (1) Data frame handling\nI0511 18:01:41.022360 1228 log.go:172] (0xc000610820) (1) Data frame sent\nI0511 18:01:41.022379 1228 log.go:172] (0xc0009c6630) (0xc000610820) Stream removed, broadcasting: 1\nI0511 18:01:41.022397 1228 log.go:172] (0xc0009c6630) Go away received\nI0511 18:01:41.022779 1228 log.go:172] (0xc0009c6630) (0xc000610820) Stream removed, broadcasting: 1\nI0511 18:01:41.022804 1228 log.go:172] (0xc0009c6630) (0xc000610000) Stream removed, broadcasting: 3\nI0511 18:01:41.022822 1228 log.go:172] (0xc0009c6630) (0xc0006100a0) Stream removed, broadcasting: 5\n" May 11 18:01:41.027: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:01:41.027: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:01:51.058: INFO: Waiting for StatefulSet statefulset-3213/ss2 to complete update May 11 18:01:51.058: INFO: Waiting for Pod statefulset-3213/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:01:51.058: INFO: Waiting for Pod statefulset-3213/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:01:51.058: INFO: Waiting for Pod statefulset-3213/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:02:01.066: INFO: Waiting for StatefulSet statefulset-3213/ss2 to complete update May 11 18:02:01.066: INFO: Waiting for Pod statefulset-3213/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:02:01.066: INFO: Waiting for Pod statefulset-3213/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:02:11.063: INFO: Waiting for StatefulSet statefulset-3213/ss2 to complete update May 11 18:02:11.064: INFO: Waiting for Pod statefulset-3213/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 11 18:02:21.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3213 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:02:21.338: INFO: stderr: "I0511 18:02:21.208649 1250 log.go:172] (0xc00088c630) (0xc00085a820) Create stream\nI0511 18:02:21.208720 1250 log.go:172] (0xc00088c630) (0xc00085a820) Stream added, broadcasting: 1\nI0511 18:02:21.211519 1250 log.go:172] (0xc00088c630) Reply frame received for 1\nI0511 18:02:21.211547 1250 log.go:172] (0xc00088c630) (0xc0002efa40) Create stream\nI0511 18:02:21.211554 1250 log.go:172] (0xc00088c630) (0xc0002efa40) Stream added, broadcasting: 3\nI0511 18:02:21.212333 1250 log.go:172] (0xc00088c630) Reply frame received for 3\nI0511 18:02:21.212386 1250 log.go:172] (0xc00088c630) (0xc00090e000) Create stream\nI0511 18:02:21.212409 1250 log.go:172] (0xc00088c630) (0xc00090e000) Stream added, broadcasting: 5\nI0511 18:02:21.213349 1250 log.go:172] (0xc00088c630) Reply frame received for 5\nI0511 18:02:21.303398 1250 log.go:172] (0xc00088c630) Data frame received for 5\nI0511 18:02:21.303426 1250 log.go:172] (0xc00090e000) (5) Data frame handling\nI0511 18:02:21.303442 1250 log.go:172] (0xc00090e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 
18:02:21.331435 1250 log.go:172] (0xc00088c630) Data frame received for 3\nI0511 18:02:21.331453 1250 log.go:172] (0xc0002efa40) (3) Data frame handling\nI0511 18:02:21.331462 1250 log.go:172] (0xc0002efa40) (3) Data frame sent\nI0511 18:02:21.331467 1250 log.go:172] (0xc00088c630) Data frame received for 3\nI0511 18:02:21.331473 1250 log.go:172] (0xc0002efa40) (3) Data frame handling\nI0511 18:02:21.332118 1250 log.go:172] (0xc00088c630) Data frame received for 5\nI0511 18:02:21.332146 1250 log.go:172] (0xc00090e000) (5) Data frame handling\nI0511 18:02:21.333587 1250 log.go:172] (0xc00088c630) Data frame received for 1\nI0511 18:02:21.333615 1250 log.go:172] (0xc00085a820) (1) Data frame handling\nI0511 18:02:21.333631 1250 log.go:172] (0xc00085a820) (1) Data frame sent\nI0511 18:02:21.333645 1250 log.go:172] (0xc00088c630) (0xc00085a820) Stream removed, broadcasting: 1\nI0511 18:02:21.333685 1250 log.go:172] (0xc00088c630) Go away received\nI0511 18:02:21.333970 1250 log.go:172] (0xc00088c630) (0xc00085a820) Stream removed, broadcasting: 1\nI0511 18:02:21.333987 1250 log.go:172] (0xc00088c630) (0xc0002efa40) Stream removed, broadcasting: 3\nI0511 18:02:21.333996 1250 log.go:172] (0xc00088c630) (0xc00090e000) Stream removed, broadcasting: 5\n" May 11 18:02:21.338: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:02:21.338: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:02:31.376: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 18:02:41.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3213 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:02:41.679: INFO: stderr: "I0511 18:02:41.600824 1268 log.go:172] (0xc000130dc0) (0xc000a12640) Create stream\nI0511 18:02:41.600870 1268 log.go:172] (0xc000130dc0) (0xc000a12640) Stream added, broadcasting: 1\nI0511 18:02:41.602582 1268 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0511 18:02:41.602612 1268 log.go:172] (0xc000130dc0) (0xc000944000) Create stream\nI0511 18:02:41.602622 1268 log.go:172] (0xc000130dc0) (0xc000944000) Stream added, broadcasting: 3\nI0511 18:02:41.603333 1268 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0511 18:02:41.603351 1268 log.go:172] (0xc000130dc0) (0xc000a126e0) Create stream\nI0511 18:02:41.603356 1268 log.go:172] (0xc000130dc0) (0xc000a126e0) Stream added, broadcasting: 5\nI0511 18:02:41.603976 1268 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0511 18:02:41.673883 1268 log.go:172] (0xc000130dc0) Data frame received for 3\nI0511 18:02:41.673950 1268 log.go:172] (0xc000944000) (3) Data frame handling\nI0511 18:02:41.673974 1268 log.go:172] (0xc000944000) (3) Data frame sent\nI0511 18:02:41.673991 1268 log.go:172] (0xc000130dc0) Data frame received for 3\nI0511 18:02:41.674004 1268 log.go:172] (0xc000944000) (3) Data frame handling\nI0511 18:02:41.674056 1268 log.go:172] (0xc000130dc0) Data frame received for 5\nI0511 18:02:41.674087 1268 log.go:172] (0xc000a126e0) (5) Data frame handling\nI0511 18:02:41.674102 1268 log.go:172] (0xc000a126e0) (5) Data frame sent\nI0511 18:02:41.674112 1268 log.go:172] (0xc000130dc0) Data frame received for 5\nI0511 18:02:41.674119 1268 log.go:172] (0xc000a126e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 18:02:41.675137 1268 log.go:172] (0xc000130dc0) Data 
frame received for 1\nI0511 18:02:41.675151 1268 log.go:172] (0xc000a12640) (1) Data frame handling\nI0511 18:02:41.675168 1268 log.go:172] (0xc000a12640) (1) Data frame sent\nI0511 18:02:41.675188 1268 log.go:172] (0xc000130dc0) (0xc000a12640) Stream removed, broadcasting: 1\nI0511 18:02:41.675206 1268 log.go:172] (0xc000130dc0) Go away received\nI0511 18:02:41.675699 1268 log.go:172] (0xc000130dc0) (0xc000a12640) Stream removed, broadcasting: 1\nI0511 18:02:41.675720 1268 log.go:172] (0xc000130dc0) (0xc000944000) Stream removed, broadcasting: 3\nI0511 18:02:41.675731 1268 log.go:172] (0xc000130dc0) (0xc000a126e0) Stream removed, broadcasting: 5\n" May 11 18:02:41.679: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:02:41.679: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:03:02.024: INFO: Waiting for StatefulSet statefulset-3213/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 18:03:12.072: INFO: Deleting all statefulset in ns statefulset-3213 May 11 18:03:12.074: INFO: Scaling statefulset ss2 to 0 May 11 18:03:52.225: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:03:52.228: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:03:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3213" for this suite. May 11 18:04:00.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:04:00.517: INFO: namespace statefulset-3213 deletion completed in 8.222943225s • [SLOW TEST:181.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:04:00.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-634bc0eb-7af9-499b-b1d0-e641f45962be STEP: Creating a pod to test consume secrets May 11 18:04:01.305: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031" in namespace "projected-4275" to 
be "success or failure" May 11 18:04:01.367: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031": Phase="Pending", Reason="", readiness=false. Elapsed: 61.388925ms May 11 18:04:03.371: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065325906s May 11 18:04:05.374: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069109727s May 11 18:04:07.378: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031": Phase="Running", Reason="", readiness=true. Elapsed: 6.073125507s May 11 18:04:09.530: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.224918614s STEP: Saw pod success May 11 18:04:09.530: INFO: Pod "pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031" satisfied condition "success or failure" May 11 18:04:09.532: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031 container projected-secret-volume-test: STEP: delete the pod May 11 18:04:09.723: INFO: Waiting for pod pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031 to disappear May 11 18:04:09.978: INFO: Pod pod-projected-secrets-1f858ba3-c1d7-4b9f-9013-ac446d29b031 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:04:09.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4275" for this suite. May 11 18:04:18.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:04:18.310: INFO: namespace projected-4275 deletion completed in 8.329602624s • [SLOW TEST:17.793 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:04:18.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:04:19.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 
--namespace=kubectl-1370' May 11 18:04:19.190: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 18:04:19.190: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 11 18:04:23.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1370' May 11 18:04:24.397: INFO: stderr: "" May 11 18:04:24.397: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:04:24.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1370" for this suite. May 11 18:04:48.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:04:49.004: INFO: namespace kubectl-1370 deletion completed in 24.563595741s • [SLOW TEST:30.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:04:49.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:04:50.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587" in namespace "downward-api-8322" to be "success or failure" May 11 18:04:50.607: INFO: Pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587": Phase="Pending", Reason="", readiness=false. Elapsed: 40.232648ms May 11 18:04:52.922: INFO: Pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354934173s May 11 18:04:54.925: INFO: Pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.358608451s May 11 18:04:56.939: INFO: Pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371990533s STEP: Saw pod success May 11 18:04:56.939: INFO: Pod "downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587" satisfied condition "success or failure" May 11 18:04:56.941: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587 container client-container: STEP: delete the pod May 11 18:04:56.978: INFO: Waiting for pod downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587 to disappear May 11 18:04:57.027: INFO: Pod downwardapi-volume-5c3ea275-3a75-4c7f-9eae-266f9a5ab587 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:04:57.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8322" for this suite. May 11 18:05:05.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:05.552: INFO: namespace downward-api-8322 deletion completed in 8.52238217s • [SLOW TEST:16.549 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:05:05.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:05:05.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6" in namespace "downward-api-5750" to be "success or failure" May 11 18:05:05.776: INFO: Pod "downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.55782ms May 11 18:05:08.179: INFO: Pod "downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436661483s May 11 18:05:10.182: INFO: Pod "downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.440143088s STEP: Saw pod success May 11 18:05:10.182: INFO: Pod "downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6" satisfied condition "success or failure" May 11 18:05:10.214: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6 container client-container: STEP: delete the pod May 11 18:05:10.255: INFO: Waiting for pod downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6 to disappear May 11 18:05:10.266: INFO: Pod downwardapi-volume-78c91ea3-2eb6-4819-90a6-6f0a5147eda6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:05:10.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5750" for this suite. May 11 18:05:18.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:18.358: INFO: namespace downward-api-5750 deletion completed in 8.090306027s • [SLOW TEST:12.805 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:05:18.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:05:18.543: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 18:05:18.686: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 18:05:23.699: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 18:05:25.705: INFO: Creating deployment "test-rolling-update-deployment" May 11 18:05:25.708: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 18:05:25.728: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 18:05:27.736: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 18:05:27.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817126, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:05:29.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817126, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:05:31.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817126, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:05:33.945: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 18:05:34.258: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7421,SelfLink:/apis/apps/v1/namespaces/deployment-7421/deployments/test-rolling-update-deployment,UID:7c8ba6ab-e081-4cd7-9387-0c9b995fdf72,ResourceVersion:10294898,Generation:1,CreationTimestamp:2020-05-11 18:05:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 18:05:25 +0000 UTC 2020-05-11 18:05:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 18:05:32 +0000 UTC 2020-05-11 18:05:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 18:05:34.261: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7421,SelfLink:/apis/apps/v1/namespaces/deployment-7421/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:431a1b1b-74b7-4b6c-aa70-b064254f9038,ResourceVersion:10294886,Generation:1,CreationTimestamp:2020-05-11 18:05:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7c8ba6ab-e081-4cd7-9387-0c9b995fdf72 0xc000c40397 0xc000c40398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 
79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 18:05:34.261: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 18:05:34.261: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7421,SelfLink:/apis/apps/v1/namespaces/deployment-7421/replicasets/test-rolling-update-controller,UID:ad0a629d-3820-45df-a1d1-c3595f608fd7,ResourceVersion:10294896,Generation:2,CreationTimestamp:2020-05-11 18:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7c8ba6ab-e081-4cd7-9387-0c9b995fdf72 0xc000c402b7 0xc000c402b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 18:05:34.263: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-tcfw5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-tcfw5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7421,SelfLink:/api/v1/namespaces/deployment-7421/pods/test-rolling-update-deployment-79f6b9d75c-tcfw5,UID:d3d25194-331d-41aa-ba26-5229911051de,ResourceVersion:10294885,Generation:0,CreationTimestamp:2020-05-11 18:05:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 431a1b1b-74b7-4b6c-aa70-b064254f9038 0xc0012d1587 0xc0012d1588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bnsx9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bnsx9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bnsx9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012d1740} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012d1760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:05:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:05:32 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:05:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:05:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.180,StartTime:2020-05-11 18:05:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 18:05:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1dd83b95ef3e12312d7b5536af6d25b1a8358496a04c48107e8116bd22fbe50f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:05:34.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7421" for this suite. May 11 18:05:42.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:42.425: INFO: namespace deployment-7421 deletion completed in 8.159096656s • [SLOW TEST:24.066 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:05:42.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
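The spec starting above deletes a pod that carries a preStop exec hook and then confirms, through the HTTPGet handler pod just created, that the hook ran before the container died. A minimal sketch of a pod with such a hook; the name and the hook command are illustrative, since the real test's hook calls back to the handler pod:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-exec-demo                  # hypothetical name
spec:
  terminationGracePeriodSeconds: 30        # the hook must finish inside this window
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container after deletion is requested,
          # before SIGTERM is delivered to the main process
          command: ["/bin/sh", "-c", "echo prestop ran >> /tmp/prestop"]
EOF
kubectl delete pod prestop-exec-demo       # triggers the preStop hook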
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 18:05:52.648: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:05:52.796: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:05:54.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:05:54.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:05:56.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:05:57.194: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:05:58.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:05:58.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:00.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:00.801: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:02.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:02.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:04.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:04.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:06.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:06.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:08.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:08.801: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:10.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:10.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:12.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:13.995: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:14.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:14.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:16.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:16.801: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:18.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:18.800: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:20.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:20.802: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:06:22.796: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:06:22.800: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:06:22.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8939" for this suite. 
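The loop above re-queries the API every two seconds until the pod object is gone, which takes about thirty seconds here because deletion waits out the preStop hook and the grace period. Outside the Go framework, kubectl can express the same wait directly (same pod and namespace names as in this run):

# Request deletion without blocking, then wait until the object is actually gone.
kubectl delete pod pod-with-prestop-exec-hook \
  --namespace=container-lifecycle-hook-8939 --wait=false
kubectl wait --for=delete pod/pod-with-prestop-exec-hook \
  --namespace=container-lifecycle-hook-8939 --timeout=120s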
May 11 18:06:46.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:06:46.914: INFO: namespace container-lifecycle-hook-8939 deletion completed in 24.105755469s • [SLOW TEST:64.489 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:06:46.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 11 18:06:57.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-68831df0-0206-43e2-af5d-4f94e63e7bfc -c busybox-main-container --namespace=emptydir-1376 -- cat /usr/share/volumeshare/shareddata.txt' May 11 18:06:57.566: INFO: stderr: "I0511 18:06:57.501649 1329 log.go:172] (0xc000a2a420) (0xc000612960) Create stream\nI0511 18:06:57.501735 1329 log.go:172] (0xc000a2a420) (0xc000612960) Stream added, broadcasting: 1\nI0511 18:06:57.504850 1329 log.go:172] (0xc000a2a420) Reply frame received for 1\nI0511 18:06:57.504907 1329 log.go:172] (0xc000a2a420) (0xc00081a000) Create stream\nI0511 18:06:57.504923 1329 log.go:172] (0xc000a2a420) (0xc00081a000) Stream added, broadcasting: 3\nI0511 18:06:57.506083 1329 log.go:172] (0xc000a2a420) Reply frame received for 3\nI0511 18:06:57.506119 1329 log.go:172] (0xc000a2a420) (0xc000814000) Create stream\nI0511 18:06:57.506131 1329 log.go:172] (0xc000a2a420) (0xc000814000) Stream added, broadcasting: 5\nI0511 18:06:57.507080 1329 log.go:172] (0xc000a2a420) Reply frame received for 5\nI0511 18:06:57.561429 1329 log.go:172] (0xc000a2a420) Data frame received for 5\nI0511 18:06:57.561449 1329 log.go:172] (0xc000814000) (5) Data frame handling\nI0511 18:06:57.561462 1329 log.go:172] (0xc000a2a420) Data frame received for 3\nI0511 18:06:57.561468 1329 log.go:172] (0xc00081a000) (3) Data frame handling\nI0511 18:06:57.561482 1329 log.go:172] (0xc00081a000) (3) Data frame sent\nI0511 18:06:57.561490 1329 log.go:172] (0xc000a2a420) Data frame received for 3\nI0511 18:06:57.561497 1329 log.go:172] (0xc00081a000) (3) Data frame handling\nI0511 18:06:57.562740 1329 log.go:172] (0xc000a2a420) Data frame received for 1\nI0511 18:06:57.562755 1329 log.go:172] (0xc000612960) (1) Data frame handling\nI0511 18:06:57.562827 1329 log.go:172] (0xc000612960) (1) Data frame sent\nI0511 18:06:57.562845 1329 log.go:172] (0xc000a2a420) 
(0xc000612960) Stream removed, broadcasting: 1\nI0511 18:06:57.562867 1329 log.go:172] (0xc000a2a420) Go away received\nI0511 18:06:57.563142 1329 log.go:172] (0xc000a2a420) (0xc000612960) Stream removed, broadcasting: 1\nI0511 18:06:57.563159 1329 log.go:172] (0xc000a2a420) (0xc00081a000) Stream removed, broadcasting: 3\nI0511 18:06:57.563164 1329 log.go:172] (0xc000a2a420) (0xc000814000) Stream removed, broadcasting: 5\n" May 11 18:06:57.567: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:06:57.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1376" for this suite. May 11 18:07:03.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:07:03.647: INFO: namespace emptydir-1376 deletion completed in 6.073379851s • [SLOW TEST:16.732 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:07:03.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 18:07:03.807: INFO: Waiting up to 5m0s for pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425" in namespace "downward-api-7742" to be "success or failure" May 11 18:07:03.864: INFO: Pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425": Phase="Pending", Reason="", readiness=false. Elapsed: 56.914265ms May 11 18:07:05.868: INFO: Pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060753213s May 11 18:07:07.872: INFO: Pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065317625s May 11 18:07:09.876: INFO: Pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.068863024s STEP: Saw pod success May 11 18:07:09.876: INFO: Pod "downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425" satisfied condition "success or failure" May 11 18:07:09.879: INFO: Trying to get logs from node iruya-worker2 pod downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425 container dapi-container: STEP: delete the pod May 11 18:07:09.898: INFO: Waiting for pod downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425 to disappear May 11 18:07:09.915: INFO: Pod downward-api-992e7ab3-42a6-4990-8fec-38950eb6f425 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:07:09.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7742" for this suite. May 11 18:07:20.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:07:20.413: INFO: namespace downward-api-7742 deletion completed in 10.494026348s • [SLOW TEST:16.766 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:07:20.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 18:07:21.167: INFO: Waiting up to 5m0s for pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d" in namespace "emptydir-4430" to be "success or failure" May 11 18:07:21.202: INFO: Pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.919173ms May 11 18:07:23.346: INFO: Pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179501604s May 11 18:07:25.492: INFO: Pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d": Phase="Running", Reason="", readiness=true. Elapsed: 4.324936918s May 11 18:07:27.495: INFO: Pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.328029196s STEP: Saw pod success May 11 18:07:27.495: INFO: Pod "pod-955ef13e-45bb-49af-899c-c932ca1dd65d" satisfied condition "success or failure" May 11 18:07:27.497: INFO: Trying to get logs from node iruya-worker pod pod-955ef13e-45bb-49af-899c-c932ca1dd65d container test-container: STEP: delete the pod May 11 18:07:27.568: INFO: Waiting for pod pod-955ef13e-45bb-49af-899c-c932ca1dd65d to disappear May 11 18:07:27.574: INFO: Pod pod-955ef13e-45bb-49af-899c-c932ca1dd65d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:07:27.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4430" for this suite. May 11 18:07:35.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:07:35.723: INFO: namespace emptydir-4430 deletion completed in 8.146307083s • [SLOW TEST:15.310 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:07:35.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:07:36.048: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 18:07:41.318: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 18:07:43.326: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 18:07:45.328: INFO: Creating deployment "test-rollover-deployment" May 11 18:07:45.498: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 18:07:47.513: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 18:07:47.559: INFO: Ensure that both replica sets have 1 created replica May 11 18:07:47.564: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 18:07:47.646: INFO: Updating deployment test-rollover-deployment May 11 18:07:47.646: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 18:07:49.832: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 18:07:49.994: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 18:07:50.300: INFO: all replica sets need to contain the pod-template-hash label May 11 18:07:50.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817269, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:07:52.490: INFO: all replica sets need to contain the pod-template-hash label May 11 18:07:52.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817269, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:07:54.701: INFO: all replica sets need to contain the pod-template-hash label May 11 18:07:54.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817269, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:07:56.357: INFO: all replica sets need to contain the pod-template-hash label May 11 18:07:56.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817269, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:07:58.306: INFO: all replica sets need to contain the pod-template-hash label May 11 18:07:58.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817277, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:00.307: INFO: all replica sets need to contain the pod-template-hash label May 11 18:08:00.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817277, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:02.351: INFO: all replica sets need to contain the pod-template-hash label May 11 18:08:02.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817277, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:06.120: INFO: all replica sets need to contain the pod-template-hash label May 11 18:08:06.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817277, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:07.136: INFO: all replica sets need to contain the pod-template-hash label May 11 18:08:07.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817277, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:09.064: INFO: May 11 18:08:09.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817288, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724817265, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:08:10.703: INFO: May 11 18:08:10.703: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 18:08:10.710: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7739,SelfLink:/apis/apps/v1/namespaces/deployment-7739/deployments/test-rollover-deployment,UID:451a5967-1dd1-4c1c-89c0-86ce903ce0fb,ResourceVersion:10295407,Generation:2,CreationTimestamp:2020-05-11 18:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 18:07:45 +0000 UTC 2020-05-11 18:07:45 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 18:08:09 +0000 UTC 2020-05-11 18:07:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 18:08:10.712: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7739,SelfLink:/apis/apps/v1/namespaces/deployment-7739/replicasets/test-rollover-deployment-854595fc44,UID:72e33bfd-3fd3-4c3a-a6a7-31b33630be1a,ResourceVersion:10295394,Generation:2,CreationTimestamp:2020-05-11 18:07:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 451a5967-1dd1-4c1c-89c0-86ce903ce0fb 0xc001cc8c07 0xc001cc8c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 18:08:10.712: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 18:08:10.713: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7739,SelfLink:/apis/apps/v1/namespaces/deployment-7739/replicasets/test-rollover-controller,UID:384eea0f-bbf5-4bea-b84f-e0d225d4694f,ResourceVersion:10295406,Generation:2,CreationTimestamp:2020-05-11 18:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 451a5967-1dd1-4c1c-89c0-86ce903ce0fb 0xc001cc8b37 0xc001cc8b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 18:08:10.713: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7739,SelfLink:/apis/apps/v1/namespaces/deployment-7739/replicasets/test-rollover-deployment-9b8b997cf,UID:47954f06-5ebc-4108-8f6d-849bf645da1d,ResourceVersion:10295355,Generation:2,CreationTimestamp:2020-05-11 18:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 451a5967-1dd1-4c1c-89c0-86ce903ce0fb 0xc001cc8cd0 0xc001cc8cd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 18:08:10.715: INFO: Pod "test-rollover-deployment-854595fc44-qhrth" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-qhrth,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7739,SelfLink:/api/v1/namespaces/deployment-7739/pods/test-rollover-deployment-854595fc44-qhrth,UID:55b0a6ae-b963-4706-b320-4490dbbcc15d,ResourceVersion:10295374,Generation:0,CreationTimestamp:2020-05-11 18:07:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 72e33bfd-3fd3-4c3a-a6a7-31b33630be1a 0xc000abfc87 0xc000abfc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v2cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v2cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v2cbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000abfd00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000abfd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:07:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:07:57 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-11 18:07:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:07:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.184,StartTime:2020-05-11 18:07:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 18:07:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d2048c132e8ab1d01a7c05d6706c4d03a6180f8a62c1b3850c3d748d1c2d53b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:08:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7739" for this suite. May 11 18:08:18.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:08:18.848: INFO: namespace deployment-7739 deletion completed in 8.129857639s • [SLOW TEST:43.125 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:08:18.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:08:27.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2386" for this suite. 
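A note on reading the test-rollover-deployment run earlier in this stretch (the repeated deployment status polls and the Deployment/ReplicaSet dumps that follow them): the pacing is set by the spec fields visible in the dump. maxUnavailable: 0 with maxSurge: 1 forces the controller to bring the new redis pod up before scaling the old nginx replica set down, and minReadySeconds: 10 means a pod that is Ready still counts as unavailable until it has stayed Ready for 10 seconds, which is why UnavailableReplicas only drops to 0 at 18:08:09 even though ReadyReplicas reached 2 by 18:07:58. Reconstructed as a manifest from the dumped Spec fields (a sketch mirroring the dump, not the test's source):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # the post-rollover image from the dump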
May 11 18:09:13.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:09:13.692: INFO: namespace kubelet-test-2386 deletion completed in 46.308910298s • [SLOW TEST:54.843 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:09:13.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 11 18:09:14.166: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 11 18:09:14.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:19.198: INFO: stderr: "" May 11 18:09:19.198: INFO: stdout: "service/redis-slave created\n" May 11 18:09:19.198: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 11 18:09:19.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:19.522: INFO: stderr: "" May 11 18:09:19.523: INFO: stdout: "service/redis-master created\n" May 11 18:09:19.523: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 11 18:09:19.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:19.864: INFO: stderr: "" May 11 18:09:19.864: INFO: stdout: "service/frontend created\n" May 11 18:09:19.864: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 11 18:09:19.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:21.078: INFO: stderr: "" May 11 18:09:21.078: INFO: stdout: "deployment.apps/frontend created\n" May 11 18:09:21.078: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 11 18:09:21.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:21.632: INFO: stderr: "" May 11 18:09:21.632: INFO: stdout: "deployment.apps/redis-master created\n" May 11 18:09:21.632: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 11 18:09:21.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5638' May 11 18:09:23.129: INFO: stderr: "" May 11 18:09:23.129: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 11 18:09:23.129: INFO: Waiting for all frontend pods to be Running. May 11 18:09:33.180: INFO: Waiting for frontend to serve content. May 11 18:09:34.251: INFO: Trying to add a new entry to the guestbook. May 11 18:09:34.609: INFO: Verifying that added entry can be retrieved. May 11 18:09:34.618: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources May 11 18:09:39.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:39.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:39.968: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 11 18:09:39.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:40.270: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:40.270: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 18:09:40.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:40.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:40.502: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 18:09:40.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:40.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:40.606: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 18:09:40.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:40.744: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:40.744: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 18:09:40.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5638' May 11 18:09:40.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:40.862: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:09:40.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5638" for this suite. 
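One detail worth flagging in the guestbook run: the 18:09:34 entry "Failed to get response from guestbook... response: {"data": ""}" is not the test failing. The validator adds an entry and then polls the frontend until a read returns it; an empty {"data": ""} read just means the write had not yet propagated through the redis master and slaves, so the framework retried. The passing "•" summary below indicates a later poll returned the entry before cleanup began at 18:09:39.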
May 11 18:10:24.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:10:25.028: INFO: namespace kubectl-5638 deletion completed in 44.121641604s • [SLOW TEST:71.336 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:10:25.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-afb8a995-bd34-41ba-8540-b4e43a8ac877 STEP: Creating a pod to test consume secrets May 11 18:10:25.243: INFO: Waiting up to 5m0s for pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7" in namespace "secrets-4478" to be "success or failure" May 11 18:10:25.278: INFO: Pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.638443ms May 11 18:10:27.282: INFO: Pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038176678s May 11 18:10:29.315: INFO: Pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071309211s May 11 18:10:31.318: INFO: Pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074625227s STEP: Saw pod success May 11 18:10:31.318: INFO: Pod "pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7" satisfied condition "success or failure" May 11 18:10:31.320: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7 container secret-volume-test: STEP: delete the pod May 11 18:10:31.490: INFO: Waiting for pod pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7 to disappear May 11 18:10:31.728: INFO: Pod pod-secrets-acd789ae-512f-490a-8602-b904a211b0c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:10:31.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4478" for this suite. 
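The secrets-volume test above follows the suite's usual consume-and-verify shape: create a Secret, mount it into a one-shot pod, and have the secret-volume-test container print the mounted file so the framework can check the pod's logs against the expected plaintext. A minimal equivalent, with the secret name, key, image, and mount path chosen for illustration (the log only pins down the container name):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example        # illustrative name
data:
  data-1: dmFsdWUtMQ==             # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative; the suite uses its own test image
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example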
May 11 18:10:40.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:10:40.086: INFO: namespace secrets-4478 deletion completed in 8.354411445s • [SLOW TEST:15.058 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:10:40.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 18:10:45.665: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:10:46.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7064" for this suite. 
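What the container-runtime test above pins down is the terminationMessagePath contract: a container running as a non-root user writes its final status to a non-default path, the kubelet copies that file into the container's status, and the test compares it to the expected string (DONE in this run). A rough equivalent pod, with the path, UID, and image as illustrative choices rather than values from the log:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                      # illustrative
    command: ["/bin/sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default; the default is /dev/termination-log
    securityContext:
      runAsUser: 1000                   # any non-root UID

After the pod exits, the message surfaces in the container status, e.g. via kubectl get pod termination-message-example -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'.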
May 11 18:10:52.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:10:52.737: INFO: namespace container-runtime-7064 deletion completed in 6.333924418s • [SLOW TEST:12.651 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:10:52.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 18:11:03.164: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:11:03.183: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:11:05.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:11:06.082: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:11:07.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:11:07.187: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:11:09.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:11:09.468: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:11:11.184: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:11:11.530: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:11:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9700" for this suite. 
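The lifecycle-hook tests in this block share one pattern, visible in the "create the container to handle the HTTPGet hook request" step: a long-lived handler pod is started first, and the pod under test aims its postStart httpGet hook at that handler, so the hook firing can be verified from the handler side. A sketch of the hooked pod, with the handler address, port, path, and image as assumptions rather than values from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # illustrative long-running image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative path the handler records
          port: 8080                   # illustrative handler port
          host: 10.244.1.100           # illustrative: the handler pod's IP

Since a container is not marked Running until its postStart handler completes, the test can verify the hook purely by watching pod and handler state, without exec'ing into anything.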
May 11 18:11:37.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:11:37.642: INFO: namespace container-lifecycle-hook-9700 deletion completed in 26.108790203s • [SLOW TEST:44.905 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:11:37.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-9500c742-41cc-432b-998e-14b54831c3c1 in namespace container-probe-4010 May 11 18:11:43.950: INFO: Started pod test-webserver-9500c742-41cc-432b-998e-14b54831c3c1 in namespace container-probe-4010 STEP: checking the pod's current state and verifying that restartCount is present May 11 18:11:43.952: INFO: Initial restart count of pod test-webserver-9500c742-41cc-432b-998e-14b54831c3c1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:15:45.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4010" for this suite. 
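This probe test is the negative case: a plain webserver that always answers /healthz is observed for roughly four minutes (18:11:43 to 18:15:45 above) and must finish with restartCount still at its initial 0. A minimal pod of the same shape, with the image and probe tuning as illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example        # illustrative name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumption: the suite's webserver image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3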
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:15:52.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 11 18:15:59.130: INFO: Pod name wrapped-volume-race-a71741e6-f32c-41db-88d0-8b79612ef7be: Found 0 pods out of 5
May 11 18:16:04.494: INFO: Pod name wrapped-volume-race-a71741e6-f32c-41db-88d0-8b79612ef7be: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a71741e6-f32c-41db-88d0-8b79612ef7be in namespace emptydir-wrapper-5193, will wait for the garbage collector to delete the pods
May 11 18:16:26.330: INFO: Deleting ReplicationController wrapped-volume-race-a71741e6-f32c-41db-88d0-8b79612ef7be took: 873.045774ms
May 11 18:16:27.830: INFO: Terminating ReplicationController wrapped-volume-race-a71741e6-f32c-41db-88d0-8b79612ef7be pods took: 1.500237826s
STEP: Creating RC which spawns configmap-volume pods
May 11 18:17:12.298: INFO: Pod name wrapped-volume-race-2929bf7f-7ab5-4b91-b1b7-eec30f51ea1e: Found 0 pods out of 5
May 11 18:17:17.332: INFO: Pod name wrapped-volume-race-2929bf7f-7ab5-4b91-b1b7-eec30f51ea1e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2929bf7f-7ab5-4b91-b1b7-eec30f51ea1e in namespace emptydir-wrapper-5193, will wait for the garbage collector to delete the pods
May 11 18:17:41.408: INFO: Deleting ReplicationController wrapped-volume-race-2929bf7f-7ab5-4b91-b1b7-eec30f51ea1e took: 399.125559ms
May 11 18:17:42.209: INFO: Terminating ReplicationController wrapped-volume-race-2929bf7f-7ab5-4b91-b1b7-eec30f51ea1e pods took: 800.289402ms
STEP: Creating RC which spawns configmap-volume pods
May 11 18:18:37.587: INFO: Pod name wrapped-volume-race-d9ebfcc4-0a91-45aa-b5e8-f65768f4546b: Found 0 pods out of 5
May 11 18:18:42.804: INFO: Pod name wrapped-volume-race-d9ebfcc4-0a91-45aa-b5e8-f65768f4546b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d9ebfcc4-0a91-45aa-b5e8-f65768f4546b in namespace emptydir-wrapper-5193, will wait for the garbage collector to delete the pods
May 11 18:18:56.956: INFO: Deleting ReplicationController wrapped-volume-race-d9ebfcc4-0a91-45aa-b5e8-f65768f4546b took: 7.014549ms
May 11 18:18:57.256: INFO: Terminating ReplicationController wrapped-volume-race-d9ebfcc4-0a91-45aa-b5e8-f65768f4546b pods took: 300.255017ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:19:54.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5193" for this suite.
May 11 18:20:08.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:08.998: INFO: namespace emptydir-wrapper-5193 deletion completed in 14.103486431s

• [SLOW TEST:256.253 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
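The spec above repeatedly spins up a ReplicationController of 5 pods, each mounting the 50 ConfigMaps created at the start as volumes, then lets the garbage collector tear everything down; the point is to shake out races in the wrapper emptyDir that backs atomically-written volumes when many of them land in a single pod. A sketch of how such a pod spec can be assembled, assuming current k8s.io/api types; the volume names and mount paths are illustrative, not the suite's:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    const numVolumes = 50 // the run above creates 50 ConfigMaps per round

    var volumes []corev1.Volume
    var mounts []corev1.VolumeMount
    for i := 0; i < numVolumes; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical naming scheme
        volumes = append(volumes, corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                },
            },
        })
        mounts = append(mounts, corev1.VolumeMount{
            Name:      name,
            MountPath: fmt.Sprintf("/etc/config-%d", i),
        })
    }

    spec := corev1.PodSpec{
        Volumes: volumes,
        Containers: []corev1.Container{{
            Name:         "test-container",
            Image:        "busybox",
            Command:      []string{"sleep", "10000"},
            VolumeMounts: mounts,
        }},
    }
    fmt.Printf("built pod spec with %d volumes and %d mounts\n",
        len(spec.Volumes), len(spec.Containers[0].VolumeMounts))
}
------------------------------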
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:20:08.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
May 11 18:20:09.063: INFO: Waiting up to 5m0s for pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730" in namespace "containers-4494" to be "success or failure"
May 11 18:20:09.076: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730": Phase="Pending", Reason="", readiness=false. Elapsed: 12.618549ms
May 11 18:20:11.080: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017242116s
May 11 18:20:13.212: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14882709s
May 11 18:20:15.321: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258091537s
May 11 18:20:17.325: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261998226s
STEP: Saw pod success
May 11 18:20:17.325: INFO: Pod "client-containers-17052788-5e41-4de8-8ee6-03a103de6730" satisfied condition "success or failure"
May 11 18:20:17.329: INFO: Trying to get logs from node iruya-worker pod client-containers-17052788-5e41-4de8-8ee6-03a103de6730 container test-container:
STEP: delete the pod
May 11 18:20:17.926: INFO: Waiting for pod client-containers-17052788-5e41-4de8-8ee6-03a103de6730 to disappear
May 11 18:20:17.967: INFO: Pod client-containers-17052788-5e41-4de8-8ee6-03a103de6730 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:20:17.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4494" for this suite.
May 11 18:20:26.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:26.263: INFO: namespace containers-4494 deletion completed in 8.293032351s

• [SLOW TEST:17.265 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
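The "use defaults" pod above succeeds because its container spec leaves both command and args empty, so the runtime falls back to the ENTRYPOINT and CMD baked into the image. The mapping is worth spelling out, since the Kubernetes field names differ from Docker's: Command overrides the image ENTRYPOINT, Args overrides the image CMD. A minimal sketch, with a hypothetical image reference:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "test-container",
        Image: "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0", // illustrative image
        // Command would override the image's ENTRYPOINT; Args would override
        // its CMD. Leaving both unset, as here, means the container runtime
        // falls back to whatever the image declares, which is the behaviour
        // this spec verifies.
    }
    b, _ := json.Marshal(c)
    fmt.Println(string(b))
}
------------------------------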
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:20:26.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-a1651378-c1bf-434d-8b2b-499f2f55d2e2
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:20:26.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4262" for this suite.
May 11 18:20:34.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:34.952: INFO: namespace configmap-4262 deletion completed in 8.462421126s

• [SLOW TEST:8.689 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
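The ConfigMap spec above passes without ever creating anything: the apiserver's validation rejects a data key that is empty (keys must be non-empty and match [-._a-zA-Z0-9]+), so the create call itself fails, which is the expected outcome. A sketch of provoking that error, assuming current client-go signatures (the v1.15-era client in this run took no context argument) and a kubeconfig path in $KUBECONFIG:

package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value"}, // empty key: rejected by API validation
    }
    _, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
    if apierrors.IsInvalid(err) {
        fmt.Println("create rejected as expected:", err)
        return
    }
    fmt.Println("unexpected result:", err)
}
------------------------------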
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:20:34.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 11 18:20:43.495: INFO: 9 pods remaining
May 11 18:20:43.495: INFO: 0 pods has nil DeletionTimestamp
May 11 18:20:43.495: INFO:
May 11 18:20:44.352: INFO: 0 pods remaining
May 11 18:20:44.352: INFO: 0 pods has nil DeletionTimestamp
May 11 18:20:44.353: INFO:
STEP: Gathering metrics
W0511 18:20:46.345269       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 18:20:46.345: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:20:46.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1307" for this suite.
May 11 18:20:56.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:56.742: INFO: namespace gc-1307 deletion completed in 10.393800499s

• [SLOW TEST:21.790 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
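What "if the deleteOptions says so" means above is foreground cascading deletion: with PropagationPolicy set to Foreground, the apiserver parks a foregroundDeletion finalizer on the RC, and the RC object only disappears once the garbage collector has removed every dependent pod, hence the "9 pods remaining" followed by "0 pods remaining" polling. A sketch of issuing such a delete, assuming current client-go signatures and a hypothetical RC name:

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Foreground: the owner object survives (with a foregroundDeletion
    // finalizer) until all dependents are gone. Background deletes the owner
    // immediately and collects dependents afterwards; Orphan leaves them behind.
    policy := metav1.DeletePropagationForeground
    err = client.CoreV1().ReplicationControllers("default").Delete(
        context.TODO(), "simpletest.rc", // hypothetical RC name
        metav1.DeleteOptions{PropagationPolicy: &policy})
    fmt.Println("delete issued:", err)
}
------------------------------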
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:20:56.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e2b32359-32e9-4cdc-99cb-d1f7a9a25bf4 in namespace container-probe-8954
May 11 18:21:04.930: INFO: Started pod liveness-e2b32359-32e9-4cdc-99cb-d1f7a9a25bf4 in namespace container-probe-8954
STEP: checking the pod's current state and verifying that restartCount is present
May 11 18:21:04.932: INFO: Initial restart count of pod liveness-e2b32359-32e9-4cdc-99cb-d1f7a9a25bf4 is 0
May 11 18:21:28.279: INFO: Restart count of pod container-probe-8954/liveness-e2b32359-32e9-4cdc-99cb-d1f7a9a25bf4 is now 1 (23.347565279s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:21:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8954" for this suite.
May 11 18:21:37.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:21:37.423: INFO: namespace container-probe-8954 deletion completed in 8.801975476s

• [SLOW TEST:40.680 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
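The next spec ("Pods should be updated") exercises an in-place update of a live pod. Updates go through optimistic concurrency: every object carries a resourceVersion, a stale version yields a 409 Conflict, and the standard pattern is a read-modify-write loop with retry; note also that only a few pod fields (labels, annotations, container images, and little else) are mutable once the pod is running. A sketch of that loop, assuming current client-go and a hypothetical pod name:

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/retry"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns, name := "default", "pod-update-demo" // hypothetical pod

    // Read-modify-write with conflict retry: if Update fails with a 409
    // because our copy is stale, re-fetch and reapply the change.
    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["time"] = "updated" // labels are mutable on a running pod
        _, err = client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
        return err
    })
    fmt.Println("update result:", err)
}
------------------------------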
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:21:37.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 18:21:46.427: INFO: Successfully updated pod "pod-update-58dcca43-21d1-4c31-9cef-8811301d0359"
STEP: verifying the updated pod is in kubernetes
May 11 18:21:46.444: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:21:46.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5979" for this suite.
May 11 18:22:08.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:22:08.543: INFO: namespace pods-5979 deletion completed in 22.095615525s

• [SLOW TEST:31.120 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:22:08.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 11 18:22:08.910: INFO: Creating deployment "nginx-deployment"
May 11 18:22:09.486: INFO: Waiting for observed generation 1
May 11 18:22:11.831: INFO: Waiting for all required pods to come up
May 11 18:22:12.103: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 11 18:22:26.525: INFO: Waiting for deployment "nginx-deployment" to complete
May 11 18:22:26.532: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 11 18:22:26.538: INFO: Updating deployment nginx-deployment
May 11 18:22:26.538: INFO: Waiting for observed generation 2
May 11 18:22:29.641: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 11 18:22:30.420: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 11 18:22:30.899: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 11 18:22:31.639: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 11 18:22:31.639: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 11 18:22:32.105: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 11 18:22:32.181: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 11 18:22:32.181: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 11 18:22:32.657: INFO: Updating deployment nginx-deployment
May 11 18:22:32.657: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 11 18:22:34.004: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 11 18:22:37.729: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 11 18:22:38.550: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2906,SelfLink:/apis/apps/v1/namespaces/deployment-2906/deployments/nginx-deployment,UID:bd11b093-e987-497b-be84-4d03e1cf9320,ResourceVersion:10298683,Generation:3,CreationTimestamp:2020-05-11 18:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-11 18:22:32 +0000 UTC 2020-05-11 18:22:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-11 18:22:35 +0000 UTC 2020-05-11 18:22:09 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 11 18:22:38.555: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2906,SelfLink:/apis/apps/v1/namespaces/deployment-2906/replicasets/nginx-deployment-55fb7cb77f,UID:72b52aa5-61e8-48e1-bd7e-bd00f0cb52df,ResourceVersion:10298670,Generation:3,CreationTimestamp:2020-05-11 18:22:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd11b093-e987-497b-be84-4d03e1cf9320 0xc002614ca7 0xc002614ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 18:22:38.555: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 11 18:22:38.555: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2906,SelfLink:/apis/apps/v1/namespaces/deployment-2906/replicasets/nginx-deployment-7b8c6f4498,UID:be895e54-9e87-4d37-9bfc-2acb383e8049,ResourceVersion:10298663,Generation:3,CreationTimestamp:2020-05-11 18:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bd11b093-e987-497b-be84-4d03e1cf9320 0xc002614d87 0xc002614d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 11 18:22:39.135: INFO: Pod "nginx-deployment-55fb7cb77f-2rgqr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2rgqr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-2rgqr,UID:e9ded6ae-9449-4101-9f1a-292f5191da98,ResourceVersion:10298586,Generation:0,CreationTimestamp:2020-05-11 18:22:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312f517 0xc00312f518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312f590} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312f5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.135: INFO: Pod "nginx-deployment-55fb7cb77f-5l478" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5l478,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-5l478,UID:edc5ac51-b02d-4e77-9064-6fb0079e1be3,ResourceVersion:10298605,Generation:0,CreationTimestamp:2020-05-11 18:22:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312f687 0xc00312f688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312f700} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312f720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.203,StartTime:2020-05-11 18:22:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.136: INFO: Pod "nginx-deployment-55fb7cb77f-82fdl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-82fdl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-82fdl,UID:e8e03bbe-1adf-41bf-b5b1-1f009c7ee839,ResourceVersion:10298596,Generation:0,CreationTimestamp:2020-05-11 18:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312f817 0xc00312f818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312f890} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312f8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.157,StartTime:2020-05-11 18:22:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.136: INFO: Pod "nginx-deployment-55fb7cb77f-8flnm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8flnm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-8flnm,UID:31c83708-a192-4f15-8451-617307bfb3d4,ResourceVersion:10298698,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312f9a7 0xc00312f9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312fa20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312fa40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 18:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.136: INFO: Pod "nginx-deployment-55fb7cb77f-96f8d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-96f8d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-96f8d,UID:dc7f0cfa-379d-4c9a-b0a2-54faad851a1a,ResourceVersion:10298674,Generation:0,CreationTimestamp:2020-05-11 18:22:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312fb17 0xc00312fb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312fb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312fbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.204,StartTime:2020-05-11 18:22:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.136: INFO: Pod "nginx-deployment-55fb7cb77f-9ffcp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9ffcp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-9ffcp,UID:e403a3c7-28af-4b06-907e-a35bc7e9961a,ResourceVersion:10298684,Generation:0,CreationTimestamp:2020-05-11 18:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312fcb7 0xc00312fcb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312fd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312fd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 18:22:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.136: INFO: Pod "nginx-deployment-55fb7cb77f-bpwj8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bpwj8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-bpwj8,UID:7a157c64-d75b-490d-9af4-ff3957b2c5a4,ResourceVersion:10298657,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312fe37 0xc00312fe38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312feb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312fed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.137: INFO: Pod "nginx-deployment-55fb7cb77f-c94bw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c94bw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-c94bw,UID:8a9e81b6-2247-4483-aad7-42d91f4d6217,ResourceVersion:10298655,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc00312ff57 0xc00312ff58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00312ffd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00312fff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.137: INFO: Pod "nginx-deployment-55fb7cb77f-cmbv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cmbv4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-cmbv4,UID:ceb9853f-c7bb-46ed-8fbc-1d656ef38c83,ResourceVersion:10298661,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc003112157 0xc003112158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003112250} {node.kubernetes.io/unreachable Exists NoExecute 0xc003112270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.137: INFO: Pod "nginx-deployment-55fb7cb77f-cnqcl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cnqcl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-cnqcl,UID:53d86cea-b831-40df-a8da-adb681eb616c,ResourceVersion:10298710,Generation:0,CreationTimestamp:2020-05-11 18:22:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc003112327 
0xc003112328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031123b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031123d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.158,StartTime:2020-05-11 18:22:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.137: INFO: Pod "nginx-deployment-55fb7cb77f-d8f2l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d8f2l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-d8f2l,UID:6751da41-8299-49b2-b643-03d3359ba002,ResourceVersion:10298654,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc0031124c7 0xc0031124c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003112540} {node.kubernetes.io/unreachable Exists NoExecute 0xc003112560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.137: INFO: Pod "nginx-deployment-55fb7cb77f-gdzgm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gdzgm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-gdzgm,UID:6ebc633e-edbb-4b2f-9016-9bc0da745f14,ResourceVersion:10298650,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc0031125e7 0xc0031125e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031126a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031126c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.138: INFO: Pod "nginx-deployment-55fb7cb77f-h6s9g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h6s9g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-55fb7cb77f-h6s9g,UID:6a9b0b7f-e20c-4961-89bb-f80d73548568,ResourceVersion:10298712,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 72b52aa5-61e8-48e1-bd7e-bd00f0cb52df 0xc003112757 0xc003112758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031127d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003112800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.138: INFO: Pod "nginx-deployment-7b8c6f4498-6t7mw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6t7mw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-6t7mw,UID:62021c14-acba-42e8-ab5f-dafeb12b9561,ResourceVersion:10298515,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0031128d7 0xc0031128d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003112950} {node.kubernetes.io/unreachable Exists NoExecute 0xc003112970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.155,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-11 18:22:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b9297acc9acf0549db4174842353b5c466b3fab5937bfef962071e75fd6bd8f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.138: INFO: Pod "nginx-deployment-7b8c6f4498-79dxv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-79dxv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-79dxv,UID:52d3e84a-bc18-4dc5-975b-db16ba56fb78,ResourceVersion:10298498,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc003112b17 0xc003112b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003112fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003112ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.202,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://44a154db1bdff92aa228039d1fbbce37f9ac1531dceabeac5c03e56189f5222b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.139: INFO: Pod "nginx-deployment-7b8c6f4498-7c5mz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7c5mz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-7c5mz,UID:1cfc4f34-4232-4e4e-b9f0-75537571e858,ResourceVersion:10298701,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0031131d7 0xc0031131d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031132e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003113300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.139: INFO: Pod "nginx-deployment-7b8c6f4498-7fhjx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7fhjx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-7fhjx,UID:639fd6c5-be96-45cb-a5af-3a12ee37d7c8,ResourceVersion:10298688,Generation:0,CreationTimestamp:2020-05-11 18:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0031135f7 0xc0031135f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003113cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003113db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.139: INFO: Pod "nginx-deployment-7b8c6f4498-8tlrx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8tlrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-8tlrx,UID:5a5a82b2-1afd-4298-8a15-a5151d3662d0,ResourceVersion:10298665,Generation:0,CreationTimestamp:2020-05-11 18:22:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0027100b7 0xc0027100b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710130} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.139: INFO: Pod "nginx-deployment-7b8c6f4498-bhwfp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhwfp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-bhwfp,UID:ee8f48ff-f156-4bd7-a6b1-e037df4005ce,ResourceVersion:10298651,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710237 0xc002710238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027102b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027102d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.139: INFO: Pod "nginx-deployment-7b8c6f4498-cj2nx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cj2nx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-cj2nx,UID:f43625d9-0953-4af1-bfbe-37f6ec7f6c09,ResourceVersion:10298642,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710357 0xc002710358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027103e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.140: INFO: Pod "nginx-deployment-7b8c6f4498-crrm2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-crrm2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-crrm2,UID:fd38362b-ad22-4112-bd6d-5b3c5dd8ef15,ResourceVersion:10298484,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710487 0xc002710488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027108b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0027108d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.152,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b90942dcdd586fc194fd60a985fd7d4171bbb9ef02e385a90142d9d3b059f445}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.140: INFO: Pod "nginx-deployment-7b8c6f4498-fkpd6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fkpd6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-fkpd6,UID:93f8d282-0584-4338-83a9-1e3d93a8a2f4,ResourceVersion:10298649,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0027109a7 0xc0027109a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.140: INFO: Pod "nginx-deployment-7b8c6f4498-fnmwq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fnmwq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-fnmwq,UID:14ff7373-1ea7-4a6b-9fbc-a23ad147e6e9,ResourceVersion:10298502,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710ac7 0xc002710ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.156,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://47c6e07eefd6db98513808a6aaae3cfbce441ff767c96d773e2ded70f4d829ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.140: INFO: Pod "nginx-deployment-7b8c6f4498-fpw2v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fpw2v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-fpw2v,UID:759a9de2-ba57-4bcd-9419-7e1f9ef710a7,ResourceVersion:10298708,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710c37 0xc002710c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 18:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.141: INFO: Pod "nginx-deployment-7b8c6f4498-g66dx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g66dx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-g66dx,UID:67fd2124-f8bc-4275-80e2-bfba7c8a250a,ResourceVersion:10298511,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710d97 0xc002710d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.153,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f8173433acc1d4a3fb74cb680b618fbe26865b58cc74627010828d439b8fae56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.141: INFO: Pod "nginx-deployment-7b8c6f4498-mz5zl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mz5zl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-mz5zl,UID:1329b14c-8bfe-45d4-9b08-2c9c2cfa9a5f,ResourceVersion:10298475,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002710f07 
0xc002710f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002710f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002710fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.199,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5f646fe18829c15e6a4aebff31899a32d761602a97d03d286a4b9d4622c75230}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.141: INFO: Pod "nginx-deployment-7b8c6f4498-nsjjj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nsjjj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-nsjjj,UID:199ec508-46ef-45c5-a666-48839e718333,ResourceVersion:10298653,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002711077 0xc002711078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027110f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002711110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.141: INFO: Pod "nginx-deployment-7b8c6f4498-s7ddl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s7ddl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-s7ddl,UID:4987ad6c-3c03-40f4-ba6d-2ea082fbfc26,ResourceVersion:10298658,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002711197 0xc002711198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002711210} {node.kubernetes.io/unreachable Exists NoExecute 0xc002711230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.141: INFO: Pod "nginx-deployment-7b8c6f4498-slxpc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-slxpc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-slxpc,UID:3eda069f-f4f2-4c31-bfde-988aa15b4797,ResourceVersion:10298652,Generation:0,CreationTimestamp:2020-05-11 18:22:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0027112b7 0xc0027112b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002711330} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002711350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.142: INFO: Pod "nginx-deployment-7b8c6f4498-stcgw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-stcgw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-stcgw,UID:932f67a8-9f75-4628-8b7f-78f78db9cea5,ResourceVersion:10298507,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0027113d7 0xc0027113d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002711450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002711470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.154,StartTime:2020-05-11 18:22:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://d80d95f34068ce9236517b35a4a78fef1c811c8855edb5d4292fb8091415bbb7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.142: INFO: Pod "nginx-deployment-7b8c6f4498-wnvps" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wnvps,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-wnvps,UID:351c6ed4-9994-41f3-b880-ee790bf6aa48,ResourceVersion:10298473,Generation:0,CreationTimestamp:2020-05-11 18:22:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002711547 0xc002711548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027115c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027115e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.198,StartTime:2020-05-11 18:22:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:22:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c764eebaf55ad6dc70f0f2bd52dac11e9850d3e9ddc2ccf30d1aa1587fe0ccff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.142: INFO: Pod "nginx-deployment-7b8c6f4498-xlfw7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xlfw7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-xlfw7,UID:8bc85446-0311-47b7-84c8-d183f9464ca5,ResourceVersion:10298706,Generation:0,CreationTimestamp:2020-05-11 18:22:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc0027116b7 0xc0027116b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002711730} {node.kubernetes.io/unreachable Exists NoExecute 0xc002711750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 18:22:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 18:22:39.142: INFO: Pod "nginx-deployment-7b8c6f4498-zs8rx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zs8rx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2906,SelfLink:/api/v1/namespaces/deployment-2906/pods/nginx-deployment-7b8c6f4498-zs8rx,UID:bf70c17c-6bef-4cdb-a558-f840ffeb3f54,ResourceVersion:10298671,Generation:0,CreationTimestamp:2020-05-11 18:22:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 be895e54-9e87-4d37-9bfc-2acb383e8049 0xc002711817 0xc002711818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dxjx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxjx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxjx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002711890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027118b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:22:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 18:22:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:22:39.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2906" for this suite. 
May 11 18:23:22.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:22.485: INFO: namespace deployment-2906 deletion completed in 42.090421743s • [SLOW TEST:73.942 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:23:22.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 11 18:23:22.615: INFO: Waiting up to 5m0s for pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf" in namespace "containers-974" to be "success or failure" May 11 18:23:22.648: INFO: Pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.356357ms May 11 18:23:25.091: INFO: Pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475148119s May 11 18:23:27.096: INFO: Pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480959375s May 11 18:23:29.100: INFO: Pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.484332731s STEP: Saw pod success May 11 18:23:29.100: INFO: Pod "client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf" satisfied condition "success or failure" May 11 18:23:29.102: INFO: Trying to get logs from node iruya-worker pod client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf container test-container: STEP: delete the pod May 11 18:23:29.138: INFO: Waiting for pod client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf to disappear May 11 18:23:29.144: INFO: Pod client-containers-a120ae37-233d-4ae7-ab18-9697cf9184cf no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:23:29.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-974" for this suite. 
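For reference, the "override arguments" step above builds a pod whose container sets only Args, so the image's ENTRYPOINT is kept while its default CMD (the docker "cmd") is replaced. A minimal sketch with the k8s.io/api types of this era — pod name, image, and args below are illustrative, not taken from this run:

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// overrideArgsPod overrides the image's default CMD by setting Args
	// and leaving Command nil, so any image ENTRYPOINT is preserved.
	func overrideArgsPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{{
					Name:  "test-container",
					Image: "docker.io/library/busybox:1.29",
					Args:  []string{"echo", "override", "arguments"},
				}},
			},
		}
	}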
May 11 18:23:37.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:37.511: INFO: namespace containers-974 deletion completed in 8.363705938s • [SLOW TEST:15.025 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:23:37.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 11 18:23:38.358: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix228304528/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:23:38.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7996" for this suite. 
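For reference, the --unix-socket step above starts `kubectl proxy` listening on a filesystem socket and then issues a plain HTTP GET against /api/ through it. A minimal Go sketch of that client side using only the standard library — the socket path is a hypothetical stand-in for the generated /tmp path:

	import (
		"context"
		"fmt"
		"io/ioutil"
		"net"
		"net/http"
	)

	// getAPIOverUnixSocket fetches /api/ through a proxy bound to a unix socket.
	func getAPIOverUnixSocket(socket string) error {
		client := &http.Client{Transport: &http.Transport{
			// Route every request to the proxy's unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		}}
		resp, err := client.Get("http://unix/api/") // host part is ignored by the dialer above
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		fmt.Println(resp.StatusCode, string(body))
		return nil
	}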
May 11 18:23:45.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:45.809: INFO: namespace kubectl-7996 deletion completed in 7.155867275s • [SLOW TEST:8.299 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:23:45.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:23:45.928: INFO: Creating ReplicaSet my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc May 11 18:23:45.956: INFO: Pod name my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc: Found 0 pods out of 1 May 11 18:23:51.170: INFO: Pod name my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc: Found 1 pods out of 1 May 11 18:23:51.171: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc" is running May 11 18:23:55.423: INFO: Pod "my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc-vmmwr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:23:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:23:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:23:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:23:45 +0000 UTC Reason: Message:}]) May 11 18:23:55.423: INFO: Trying to dial the pod May 11 18:24:00.435: INFO: Controller my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc: Got expected result from replica 1 [my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc-vmmwr]: "my-hostname-basic-890238f0-6329-42f6-9b26-5e77eec4c5cc-vmmwr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:24:00.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2816" for this suite. 
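For reference, the ReplicaSet the step above creates is just a selector plus a pod template running a hostname-serving image; each replica is then dialed and must answer with its own pod name. A sketch with apps/v1 types — names and image are illustrative, not from this run:

	import (
		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// basicReplicaSet runs `replicas` pods that each serve their own hostname.
	func basicReplicaSet(name string, replicas int32) *appsv1.ReplicaSet {
		labels := map[string]string{"name": name}
		return &appsv1.ReplicaSet{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: appsv1.ReplicaSetSpec{
				Replicas: &replicas,
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
					}}},
				},
			},
		}
	}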
May 11 18:24:14.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:24:14.942: INFO: namespace replicaset-2816 deletion completed in 14.50320351s • [SLOW TEST:29.132 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:24:14.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 18:24:25.132: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 18:24:25.164: INFO: Pod pod-with-prestop-http-hook still exists May 11 18:24:27.164: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 18:24:27.342: INFO: Pod pod-with-prestop-http-hook still exists May 11 18:24:29.164: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 18:24:29.467: INFO: Pod pod-with-prestop-http-hook still exists May 11 18:24:31.164: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 18:24:31.182: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:24:31.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1776" for this suite. 
May 11 18:24:49.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:24:50.584: INFO: namespace container-lifecycle-hook-1776 deletion completed in 19.393914104s • [SLOW TEST:35.641 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:24:50.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 18:25:23.753336 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 18:25:23.753: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:25:23.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9844" for this suite. 
May 11 18:25:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:25:31.918: INFO: namespace gc-9844 deletion completed in 8.162726401s • [SLOW TEST:41.334 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:25:31.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:25:32.023: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.302807ms) May 11 18:25:32.027: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 3.512001ms) May 11 18:25:32.030: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.835553ms) May 11 18:25:32.033: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 3.805474ms) May 11 18:25:32.037: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 3.698776ms) May 11 18:25:32.040: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.810226ms) May 11 18:25:32.044: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 3.509733ms) May 11 18:25:32.046: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.82881ms) May 11 18:25:32.049: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.978878ms) May 11 18:25:32.052: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.61947ms) May 11 18:25:32.055: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.896974ms) May 11 18:25:32.058: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.895612ms) May 11 18:25:32.060: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.253514ms) May 11 18:25:32.063: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.637245ms) May 11 18:25:32.065: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.477467ms) May 11 18:25:32.068: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.390004ms) May 11 18:25:32.070: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 2.425836ms) May 11 18:25:32.115: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 45.300194ms) May 11 18:25:32.152: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 36.433071ms) May 11 18:25:32.156: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/
(200; 4.095388ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:25:32.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3275" for this suite. May 11 18:25:40.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:25:40.433: INFO: namespace proxy-3275 deletion completed in 8.27364337s • [SLOW TEST:8.515 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:25:40.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 18:25:48.360: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:25:49.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7838" for this suite. 
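For reference, the container in the test above writes "OK" to /dev/termination-log and exits zero; with TerminationMessagePolicy FallbackToLogsOnError the file's contents are still used whenever present, and the container logs are consulted only if the container fails with an empty message file. A sketch of the container spec — image and command are illustrative:

	import corev1 "k8s.io/api/core/v1"

	container := corev1.Container{
		Name:    "termination-message-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		// The file is read first; logs are the fallback only on error with an empty file.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}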
May 11 18:25:55.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:25:55.602: INFO: namespace container-runtime-7838 deletion completed in 6.373973471s • [SLOW TEST:15.169 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:25:55.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 18:26:03.051: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:26:03.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1583" for this suite. 
May 11 18:26:11.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:26:11.470: INFO: namespace container-runtime-1583 deletion completed in 8.238246436s • [SLOW TEST:15.868 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:26:11.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:26:22.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4521" for this suite. 
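For reference, the "should not conflict" case above mounts a Secret volume and a ConfigMap volume side by side in one pod and checks that their wrapper handling does not collide. A sketch of the two volume entries — volume and object names are placeholders:

	import corev1 "k8s.io/api/core/v1"

	// Two independently wrapped volumes on the same pod must coexist.
	volumes := []corev1.Volume{
		{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-volume-secret"},
			},
		},
		{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-volume-configmap"},
				},
			},
		},
	}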
May 11 18:26:31.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:26:31.375: INFO: namespace emptydir-wrapper-4521 deletion completed in 8.42029727s • [SLOW TEST:19.904 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:26:31.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 11 18:26:36.764: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:26:37.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6273" for this suite. 
May 11 18:27:01.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:27:01.951: INFO: namespace replicaset-6273 deletion completed in 24.156734669s • [SLOW TEST:30.576 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:27:01.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-b1e616ac-4874-4f73-99a7-a37c74bd644a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:27:10.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2469" for this suite. May 11 18:27:32.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:27:32.224: INFO: namespace configmap-2469 deletion completed in 22.11655392s • [SLOW TEST:30.273 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:27:32.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:27:32.375: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.239976ms) May 11 18:27:32.378: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.053562ms) May 11 18:27:32.381: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.897106ms) May 11 18:27:32.384: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.297473ms) May 11 18:27:32.387: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.266527ms) May 11 18:27:32.390: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.790622ms) May 11 18:27:32.393: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.974819ms) May 11 18:27:32.397: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.234991ms) May 11 18:27:32.472: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 75.239161ms) May 11 18:27:32.476: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.029323ms) May 11 18:27:32.479: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.188334ms) May 11 18:27:32.482: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.609299ms) May 11 18:27:32.485: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.033385ms) May 11 18:27:32.488: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.752614ms) May 11 18:27:32.491: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.86936ms) May 11 18:27:32.493: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.886169ms) May 11 18:27:32.496: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.864191ms) May 11 18:27:32.499: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.785994ms) May 11 18:27:32.502: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.769299ms) May 11 18:27:32.505: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/
(200; 2.912536ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:27:32.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9681" for this suite. May 11 18:27:38.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:27:38.595: INFO: namespace proxy-9681 deletion completed in 6.087609316s • [SLOW TEST:6.371 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:27:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4564f4ff-6922-4754-9c9f-a6ab59ff534c STEP: Creating a pod to test consume secrets May 11 18:27:38.966: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068" in namespace "projected-4709" to be "success or failure" May 11 18:27:39.090: INFO: Pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068": Phase="Pending", Reason="", readiness=false. Elapsed: 123.798242ms May 11 18:27:41.093: INFO: Pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126774507s May 11 18:27:43.096: INFO: Pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068": Phase="Running", Reason="", readiness=true. Elapsed: 4.129360125s May 11 18:27:45.099: INFO: Pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.133019174s STEP: Saw pod success May 11 18:27:45.099: INFO: Pod "pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068" satisfied condition "success or failure" May 11 18:27:45.101: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068 container projected-secret-volume-test: STEP: delete the pod May 11 18:27:45.280: INFO: Waiting for pod pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068 to disappear May 11 18:27:45.311: INFO: Pod pod-projected-secrets-8e650855-daa0-4409-8580-2543fa076068 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:27:45.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4709" for this suite. May 11 18:27:51.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:27:51.416: INFO: namespace projected-4709 deletion completed in 6.101293712s • [SLOW TEST:12.820 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:27:51.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9bfhr in namespace proxy-6069 I0511 18:27:52.028502 7 runners.go:180] Created replication controller with name: proxy-service-9bfhr, namespace: proxy-6069, replica count: 1 I0511 18:27:53.079049 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:27:54.079256 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:27:55.079474 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:27:56.079738 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:27:57.079916 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:27:58.080081 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0511 18:27:59.080265 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:00.080478 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:01.080735 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:02.080930 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:03.081214 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:04.081371 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:05.081547 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:06.081686 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:07.081842 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:08.082059 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 18:28:09.082233 7 runners.go:180] proxy-service-9bfhr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 18:28:09.085: INFO: setup took 17.437468559s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 18:28:09.090: INFO: (0) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.331888ms) May 11 18:28:09.090: INFO: (0) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.767054ms) May 11 18:28:09.090: INFO: (0) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.989744ms) May 11 18:28:09.091: INFO: (0) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.055138ms) May 11 18:28:09.091: INFO: (0) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.279178ms) May 11 18:28:09.091: INFO: (0) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.312549ms) May 11 18:28:09.091: INFO: (0) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... 
(200; 5.511774ms) May 11 18:28:09.091: INFO: (0) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.552243ms) May 11 18:28:09.092: INFO: (0) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 6.615332ms) May 11 18:28:09.093: INFO: (0) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 6.947405ms) May 11 18:28:09.093: INFO: (0) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 7.197103ms) May 11 18:28:09.096: INFO: (0) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 3.602073ms) May 11 18:28:09.104: INFO: (1) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 3.629031ms) May 11 18:28:09.104: INFO: (1) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.602403ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.540123ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 5.772373ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 5.902062ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.915043ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 5.964202ms) May 11 18:28:09.106: INFO: (1) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.97501ms) May 11 18:28:09.107: INFO: (1) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 6.131246ms) May 11 18:28:09.107: INFO: (1) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 6.119909ms) May 11 18:28:09.107: INFO: (1) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.137365ms) May 11 18:28:09.107: INFO: (1) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 6.216859ms) May 11 18:28:09.112: INFO: (2) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.39259ms) May 11 18:28:09.112: INFO: (2) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 6.399852ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 6.373929ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 6.434605ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... 
(200; 6.404036ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 6.382004ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.703049ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.717961ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.706004ms) May 11 18:28:09.113: INFO: (2) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 6.761479ms) May 11 18:28:09.114: INFO: (2) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 6.836607ms) May 11 18:28:09.114: INFO: (2) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 6.990617ms) May 11 18:28:09.114: INFO: (2) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 7.073056ms) May 11 18:28:09.114: INFO: (2) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 7.622012ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.274859ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.666792ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 4.688614ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.749069ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.69763ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.971522ms) May 11 18:28:09.119: INFO: (3) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.090292ms) May 11 18:28:09.120: INFO: (3) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.739727ms) May 11 18:28:09.120: INFO: (3) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 5.99105ms) May 11 18:28:09.121: INFO: (3) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 6.320025ms) May 11 18:28:09.121: INFO: (3) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.316933ms) May 11 18:28:09.121: INFO: (3) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 6.436183ms) May 11 18:28:09.121: INFO: (3) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test<... 
(200; 3.944323ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.952515ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.01293ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.217427ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 4.236104ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 4.197015ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 4.181314ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.451736ms) May 11 18:28:09.125: INFO: (4) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 4.441433ms) May 11 18:28:09.126: INFO: (4) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 4.761182ms) May 11 18:28:09.126: INFO: (4) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 5.330192ms) May 11 18:28:09.126: INFO: (4) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.478372ms) May 11 18:28:09.127: INFO: (4) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.598308ms) May 11 18:28:09.127: INFO: (4) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.121324ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 7.381549ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 7.476229ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 7.57225ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 7.504696ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 7.487557ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 7.619751ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... 
(200; 7.631918ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 7.641061ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 7.684848ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 7.766583ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 7.834335ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 7.730758ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 7.895482ms) May 11 18:28:09.135: INFO: (5) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 2.061807ms) May 11 18:28:09.140: INFO: (6) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.088646ms) May 11 18:28:09.140: INFO: (6) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.869232ms) May 11 18:28:09.140: INFO: (6) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 3.557079ms) May 11 18:28:09.140: INFO: (6) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.705365ms) May 11 18:28:09.140: INFO: (6) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test<... (200; 5.587208ms) May 11 18:28:09.148: INFO: (7) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.790025ms) May 11 18:28:09.148: INFO: (7) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.766045ms) May 11 18:28:09.148: INFO: (7) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.863039ms) May 11 18:28:09.148: INFO: (7) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 6.068285ms) May 11 18:28:09.148: INFO: (7) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 7.162609ms) May 11 18:28:09.149: INFO: (7) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 7.268572ms) May 11 18:28:09.149: INFO: (7) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 7.331151ms) May 11 18:28:09.149: INFO: (7) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 7.293275ms) May 11 18:28:09.152: INFO: (8) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 2.264707ms) May 11 18:28:09.152: INFO: (8) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 2.224495ms) May 11 18:28:09.154: INFO: (8) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 4.303887ms) May 11 18:28:09.154: INFO: (8) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.340778ms) May 11 18:28:09.154: INFO: (8) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... 
(200; 4.656544ms) May 11 18:28:09.154: INFO: (8) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 4.777899ms) May 11 18:28:09.155: INFO: (8) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 5.415685ms) May 11 18:28:09.155: INFO: (8) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.37682ms) May 11 18:28:09.155: INFO: (8) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 4.31142ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 4.710681ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.177849ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.19334ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.512675ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.126118ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.697242ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.574805ms) May 11 18:28:09.164: INFO: (9) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.675121ms) May 11 18:28:09.165: INFO: (9) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 5.980391ms) May 11 18:28:09.169: INFO: (10) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 3.266855ms) May 11 18:28:09.169: INFO: (10) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 3.759961ms) May 11 18:28:09.169: INFO: (10) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.12652ms) May 11 18:28:09.169: INFO: (10) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.42343ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.109413ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.885699ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.03293ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.122699ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 4.310258ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 4.238202ms) May 11 18:28:09.170: INFO: (10) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 2.739007ms) May 11 18:28:09.174: INFO: (11) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 2.914016ms) May 11 18:28:09.174: INFO: (11) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... 
(200; 2.867407ms) May 11 18:28:09.176: INFO: (11) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.096398ms) May 11 18:28:09.176: INFO: (11) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.260826ms) May 11 18:28:09.176: INFO: (11) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 5.933947ms) May 11 18:28:09.177: INFO: (11) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.468067ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 6.570929ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.654908ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 6.880176ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 7.048918ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 7.379379ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 7.682292ms) May 11 18:28:09.178: INFO: (11) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 7.647205ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 4.337029ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 4.528704ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.511933ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.63574ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 4.672823ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.581754ms) May 11 18:28:09.183: INFO: (12) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 5.071332ms) May 11 18:28:09.184: INFO: (12) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.016267ms) May 11 18:28:09.184: INFO: (12) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 5.090585ms) May 11 18:28:09.184: INFO: (12) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.184412ms) May 11 18:28:09.184: INFO: (12) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 5.291228ms) May 11 18:28:09.184: INFO: (12) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.404762ms) May 11 18:28:09.185: INFO: (12) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 5.979868ms) May 11 18:28:09.185: INFO: (12) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 5.99854ms) May 11 18:28:09.191: INFO: (13) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... 
(200; 6.617695ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.609545ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 6.929855ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 6.920781ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 7.179254ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 7.091701ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 7.143407ms) May 11 18:28:09.192: INFO: (13) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 3.685674ms) May 11 18:28:09.198: INFO: (14) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 3.726774ms) May 11 18:28:09.198: INFO: (14) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.748469ms) May 11 18:28:09.198: INFO: (14) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.728286ms) May 11 18:28:09.198: INFO: (14) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 3.859077ms) May 11 18:28:09.198: INFO: (14) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test<... (200; 3.830039ms) May 11 18:28:09.199: INFO: (14) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 4.251367ms) May 11 18:28:09.199: INFO: (14) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 4.309659ms) May 11 18:28:09.199: INFO: (14) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 4.4122ms) May 11 18:28:09.224: INFO: (14) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 30.197083ms) May 11 18:28:09.225: INFO: (14) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 30.562846ms) May 11 18:28:09.225: INFO: (14) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 30.586823ms) May 11 18:28:09.225: INFO: (14) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 30.576858ms) May 11 18:28:09.230: INFO: (15) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 4.362698ms) May 11 18:28:09.230: INFO: (15) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 4.542926ms) May 11 18:28:09.230: INFO: (15) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 4.890122ms) May 11 18:28:09.230: INFO: (15) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 5.120443ms) May 11 18:28:09.231: INFO: (15) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... 
(200; 6.042352ms) May 11 18:28:09.232: INFO: (15) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 6.712146ms) May 11 18:28:09.232: INFO: (15) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 7.113137ms) May 11 18:28:09.232: INFO: (15) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 7.16099ms) May 11 18:28:09.232: INFO: (15) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 7.224419ms) May 11 18:28:09.233: INFO: (15) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 7.864809ms) May 11 18:28:09.233: INFO: (15) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 8.05817ms) May 11 18:28:09.234: INFO: (15) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 8.360985ms) May 11 18:28:09.234: INFO: (15) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 8.396094ms) May 11 18:28:09.234: INFO: (15) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 8.781035ms) May 11 18:28:09.234: INFO: (15) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test<... (200; 3.60155ms) May 11 18:28:09.239: INFO: (16) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 5.325894ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 5.628952ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 5.672585ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.635417ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.702749ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.6695ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 5.765264ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 5.747404ms) May 11 18:28:09.240: INFO: (16) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 5.771829ms) May 11 18:28:09.241: INFO: (16) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 6.740488ms) May 11 18:28:09.241: INFO: (16) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 6.760859ms) May 11 18:28:09.241: INFO: (16) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 6.897572ms) May 11 18:28:09.241: INFO: (16) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.847635ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.590777ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.691739ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.4925ms) May 11 18:28:09.245: INFO: (17) 
/api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 3.69735ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 3.871016ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 3.930333ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 4.224715ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 4.330842ms) May 11 18:28:09.245: INFO: (17) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 4.292142ms) May 11 18:28:09.246: INFO: (17) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.404763ms) May 11 18:28:09.246: INFO: (17) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 4.321549ms) May 11 18:28:09.246: INFO: (17) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 4.368329ms) May 11 18:28:09.246: INFO: (17) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 4.458865ms) May 11 18:28:09.247: INFO: (17) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 5.783209ms) May 11 18:28:09.250: INFO: (18) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... (200; 3.199635ms) May 11 18:28:09.250: INFO: (18) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 3.157867ms) May 11 18:28:09.251: INFO: (18) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 3.660792ms) May 11 18:28:09.264: INFO: (18) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 16.008504ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl/proxy/: test (200; 43.499085ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 43.202424ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 43.424431ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:1080/proxy/: ... (200; 43.30364ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:460/proxy/: tls baz (200; 43.479128ms) May 11 18:28:09.291: INFO: (18) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: test (200; 3.293583ms) May 11 18:28:09.295: INFO: (19) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/: tls qux (200; 3.392609ms) May 11 18:28:09.295: INFO: (19) /api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:443/proxy/: ... (200; 4.468503ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.86095ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:1080/proxy/: test<... 
(200; 4.998404ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.91024ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/: foo (200; 4.921713ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/pods/http:proxy-service-9bfhr-gg7jl:162/proxy/: bar (200; 4.832584ms) May 11 18:28:09.297: INFO: (19) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/: foo (200; 5.115929ms) May 11 18:28:09.298: INFO: (19) /api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname2/proxy/: bar (200; 5.413414ms) May 11 18:28:09.298: INFO: (19) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname2/proxy/: bar (200; 5.587313ms) May 11 18:28:09.298: INFO: (19) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname2/proxy/: tls qux (200; 6.142739ms) May 11 18:28:09.298: INFO: (19) /api/v1/namespaces/proxy-6069/services/http:proxy-service-9bfhr:portname1/proxy/: foo (200; 6.002215ms) May 11 18:28:09.298: INFO: (19) /api/v1/namespaces/proxy-6069/services/https:proxy-service-9bfhr:tlsportname1/proxy/: tls baz (200; 6.097786ms) STEP: deleting ReplicationController proxy-service-9bfhr in namespace proxy-6069, will wait for the garbage collector to delete the pods May 11 18:28:09.355: INFO: Deleting ReplicationController proxy-service-9bfhr took: 5.111883ms May 11 18:28:09.655: INFO: Terminating ReplicationController proxy-service-9bfhr pods took: 300.202004ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:28:21.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6069" for this suite. 
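The proxy checks above can be reproduced by hand through the apiserver proxy subresource. A minimal sketch, assuming the namespace, pod, and service names from this run (all of which are removed during teardown):

    $ kubectl proxy --port=8001 &
    # pod proxy: direct port and scheme-prefixed (https) forms, as exercised above
    $ curl http://localhost:8001/api/v1/namespaces/proxy-6069/pods/proxy-service-9bfhr-gg7jl:160/proxy/
    $ curl http://localhost:8001/api/v1/namespaces/proxy-6069/pods/https:proxy-service-9bfhr-gg7jl:462/proxy/
    # service proxy by named port
    $ curl http://localhost:8001/api/v1/namespaces/proxy-6069/services/proxy-service-9bfhr:portname1/proxy/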
May 11 18:28:28.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:28:28.166: INFO: namespace proxy-6069 deletion completed in 6.207001077s • [SLOW TEST:36.751 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:28:28.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 11 18:28:28.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3721' May 11 18:28:34.904: INFO: stderr: "" May 11 18:28:34.904: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:28:34.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3721' May 11 18:28:35.183: INFO: stderr: "" May 11 18:28:35.183: INFO: stdout: "update-demo-nautilus-2nmqr update-demo-nautilus-ff2bw " May 11 18:28:35.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nmqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:28:35.381: INFO: stderr: "" May 11 18:28:35.381: INFO: stdout: "" May 11 18:28:35.381: INFO: update-demo-nautilus-2nmqr is created but not running May 11 18:28:40.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3721' May 11 18:28:40.487: INFO: stderr: "" May 11 18:28:40.487: INFO: stdout: "update-demo-nautilus-2nmqr update-demo-nautilus-ff2bw " May 11 18:28:40.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nmqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:28:40.605: INFO: stderr: "" May 11 18:28:40.605: INFO: stdout: "true" May 11 18:28:40.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nmqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:28:40.902: INFO: stderr: "" May 11 18:28:40.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:28:40.902: INFO: validating pod update-demo-nautilus-2nmqr May 11 18:28:41.162: INFO: got data: { "image": "nautilus.jpg" } May 11 18:28:41.162: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:28:41.162: INFO: update-demo-nautilus-2nmqr is verified up and running May 11 18:28:41.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ff2bw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:28:41.382: INFO: stderr: "" May 11 18:28:41.382: INFO: stdout: "true" May 11 18:28:41.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ff2bw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:28:41.482: INFO: stderr: "" May 11 18:28:41.482: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:28:41.482: INFO: validating pod update-demo-nautilus-ff2bw May 11 18:28:41.485: INFO: got data: { "image": "nautilus.jpg" } May 11 18:28:41.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:28:41.485: INFO: update-demo-nautilus-ff2bw is verified up and running STEP: rolling-update to new replication controller May 11 18:28:41.487: INFO: scanned /root for discovery docs: May 11 18:28:41.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3721' May 11 18:29:09.625: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 18:29:09.625: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 18:29:09.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3721' May 11 18:29:09.714: INFO: stderr: "" May 11 18:29:09.714: INFO: stdout: "update-demo-kitten-57l8z update-demo-kitten-h7kw9 update-demo-nautilus-2nmqr " STEP: Replicas for name=update-demo: expected=2 actual=3 May 11 18:29:14.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3721' May 11 18:29:14.847: INFO: stderr: "" May 11 18:29:14.847: INFO: stdout: "update-demo-kitten-57l8z update-demo-kitten-h7kw9 " May 11 18:29:14.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-57l8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:29:14.941: INFO: stderr: "" May 11 18:29:14.941: INFO: stdout: "true" May 11 18:29:14.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-57l8z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:29:15.034: INFO: stderr: "" May 11 18:29:15.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 18:29:15.034: INFO: validating pod update-demo-kitten-57l8z May 11 18:29:15.037: INFO: got data: { "image": "kitten.jpg" } May 11 18:29:15.038: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 18:29:15.038: INFO: update-demo-kitten-57l8z is verified up and running May 11 18:29:15.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h7kw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:29:15.128: INFO: stderr: "" May 11 18:29:15.128: INFO: stdout: "true" May 11 18:29:15.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h7kw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3721' May 11 18:29:15.219: INFO: stderr: "" May 11 18:29:15.219: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 18:29:15.219: INFO: validating pod update-demo-kitten-h7kw9 May 11 18:29:15.223: INFO: got data: { "image": "kitten.jpg" } May 11 18:29:15.223: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 18:29:15.223: INFO: update-demo-kitten-h7kw9 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:29:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3721" for this suite. 
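The update above uses the long-deprecated kubectl rolling-update (note the stderr warning in the log). A sketch of that flow and of the Deployment-based replacement the warning points to; the manifest file and Deployment name are illustrative, not from this run:

    # the RC-based flow the test runs (kitten-rc.yaml is a hypothetical new RC spec)
    $ kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml
    # preferred replacement, assuming the workload were a Deployment instead of an RC
    $ kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    $ kubectl rollout status deployment/update-demo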
May 11 18:29:41.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:29:41.316: INFO: namespace kubectl-3721 deletion completed in 26.089685453s • [SLOW TEST:73.149 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:29:41.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7392 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 18:29:41.826: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 18:30:12.187: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.179 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7392 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:30:12.187: INFO: >>> kubeConfig: /root/.kube/config I0511 18:30:12.222806 7 log.go:172] (0xc000e1c420) (0xc000541360) Create stream I0511 18:30:12.222838 7 log.go:172] (0xc000e1c420) (0xc000541360) Stream added, broadcasting: 1 I0511 18:30:12.224808 7 log.go:172] (0xc000e1c420) Reply frame received for 1 I0511 18:30:12.224845 7 log.go:172] (0xc000e1c420) (0xc002d5dc20) Create stream I0511 18:30:12.224865 7 log.go:172] (0xc000e1c420) (0xc002d5dc20) Stream added, broadcasting: 3 I0511 18:30:12.225992 7 log.go:172] (0xc000e1c420) Reply frame received for 3 I0511 18:30:12.226027 7 log.go:172] (0xc000e1c420) (0xc0005414a0) Create stream I0511 18:30:12.226033 7 log.go:172] (0xc000e1c420) (0xc0005414a0) Stream added, broadcasting: 5 I0511 18:30:12.227061 7 log.go:172] (0xc000e1c420) Reply frame received for 5 I0511 18:30:13.299182 7 log.go:172] (0xc000e1c420) Data frame received for 5 I0511 18:30:13.299242 7 log.go:172] (0xc0005414a0) (5) Data frame handling I0511 18:30:13.299276 7 log.go:172] (0xc000e1c420) Data frame received for 3 I0511 18:30:13.299293 7 log.go:172] (0xc002d5dc20) (3) Data frame handling I0511 18:30:13.299317 7 log.go:172] (0xc002d5dc20) (3) Data frame sent I0511 18:30:13.299337 7 log.go:172] (0xc000e1c420) Data frame received for 3 I0511 18:30:13.299354 7 log.go:172] (0xc002d5dc20) (3) Data frame handling I0511 18:30:13.302602 7 log.go:172] (0xc000e1c420) Data frame received for 1 I0511 18:30:13.302634 7 log.go:172] (0xc000541360) (1) Data 
frame handling I0511 18:30:13.302648 7 log.go:172] (0xc000541360) (1) Data frame sent I0511 18:30:13.302663 7 log.go:172] (0xc000e1c420) (0xc000541360) Stream removed, broadcasting: 1 I0511 18:30:13.302682 7 log.go:172] (0xc000e1c420) Go away received I0511 18:30:13.302856 7 log.go:172] (0xc000e1c420) (0xc000541360) Stream removed, broadcasting: 1 I0511 18:30:13.302886 7 log.go:172] (0xc000e1c420) (0xc002d5dc20) Stream removed, broadcasting: 3 I0511 18:30:13.302908 7 log.go:172] (0xc000e1c420) (0xc0005414a0) Stream removed, broadcasting: 5 May 11 18:30:13.302: INFO: Found all expected endpoints: [netserver-0] May 11 18:30:13.307: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.224 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7392 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:30:13.307: INFO: >>> kubeConfig: /root/.kube/config I0511 18:30:13.340083 7 log.go:172] (0xc0009bbef0) (0xc0017da000) Create stream I0511 18:30:13.340115 7 log.go:172] (0xc0009bbef0) (0xc0017da000) Stream added, broadcasting: 1 I0511 18:30:13.342296 7 log.go:172] (0xc0009bbef0) Reply frame received for 1 I0511 18:30:13.342331 7 log.go:172] (0xc0009bbef0) (0xc00233db80) Create stream I0511 18:30:13.342350 7 log.go:172] (0xc0009bbef0) (0xc00233db80) Stream added, broadcasting: 3 I0511 18:30:13.343279 7 log.go:172] (0xc0009bbef0) Reply frame received for 3 I0511 18:30:13.343330 7 log.go:172] (0xc0009bbef0) (0xc00233dc20) Create stream I0511 18:30:13.343346 7 log.go:172] (0xc0009bbef0) (0xc00233dc20) Stream added, broadcasting: 5 I0511 18:30:13.344622 7 log.go:172] (0xc0009bbef0) Reply frame received for 5 I0511 18:30:14.423870 7 log.go:172] (0xc0009bbef0) Data frame received for 5 I0511 18:30:14.423903 7 log.go:172] (0xc00233dc20) (5) Data frame handling I0511 18:30:14.423925 7 log.go:172] (0xc0009bbef0) Data frame received for 3 I0511 18:30:14.423933 7 log.go:172] (0xc00233db80) (3) Data frame handling I0511 18:30:14.423946 7 log.go:172] (0xc00233db80) (3) Data frame sent I0511 18:30:14.423957 7 log.go:172] (0xc0009bbef0) Data frame received for 3 I0511 18:30:14.423964 7 log.go:172] (0xc00233db80) (3) Data frame handling I0511 18:30:14.425821 7 log.go:172] (0xc0009bbef0) Data frame received for 1 I0511 18:30:14.425850 7 log.go:172] (0xc0017da000) (1) Data frame handling I0511 18:30:14.425866 7 log.go:172] (0xc0017da000) (1) Data frame sent I0511 18:30:14.425887 7 log.go:172] (0xc0009bbef0) (0xc0017da000) Stream removed, broadcasting: 1 I0511 18:30:14.425896 7 log.go:172] (0xc0009bbef0) Go away received I0511 18:30:14.426020 7 log.go:172] (0xc0009bbef0) (0xc0017da000) Stream removed, broadcasting: 1 I0511 18:30:14.426037 7 log.go:172] (0xc0009bbef0) (0xc00233db80) Stream removed, broadcasting: 3 I0511 18:30:14.426047 7 log.go:172] (0xc0009bbef0) (0xc00233dc20) Stream removed, broadcasting: 5 May 11 18:30:14.426: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:30:14.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7392" for this suite. 
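The node-pod UDP check boils down to the netcat probe visible in the ExecWithOptions lines above. The equivalent manual form, noting that the pod IPs are ephemeral and specific to this run:

    $ kubectl exec -n pod-network-test-7392 host-test-container-pod -c hostexec -- \
        /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.179 8081 | grep -v '^\s*$'"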
May 11 18:30:40.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:30:40.543: INFO: namespace pod-network-test-7392 deletion completed in 26.112236802s • [SLOW TEST:59.227 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:30:40.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 18:30:41.570: INFO: Waiting up to 5m0s for pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e" in namespace "emptydir-6288" to be "success or failure" May 11 18:30:41.626: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Pending", Reason="", readiness=false. Elapsed: 56.172839ms May 11 18:30:43.766: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196386042s May 11 18:30:45.771: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200781681s May 11 18:30:47.775: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205013079s May 11 18:30:49.904: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Running", Reason="", readiness=true. Elapsed: 8.334226723s May 11 18:30:52.155: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.585291653s STEP: Saw pod success May 11 18:30:52.155: INFO: Pod "pod-f09b481b-6abe-4bf1-a5ed-0181282e963e" satisfied condition "success or failure" May 11 18:30:52.158: INFO: Trying to get logs from node iruya-worker pod pod-f09b481b-6abe-4bf1-a5ed-0181282e963e container test-container: STEP: delete the pod May 11 18:30:53.110: INFO: Waiting for pod pod-f09b481b-6abe-4bf1-a5ed-0181282e963e to disappear May 11 18:30:53.374: INFO: Pod pod-f09b481b-6abe-4bf1-a5ed-0181282e963e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:30:53.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6288" for this suite. 
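What the emptyDir (root,0644,default) case exercises: a file written into an emptyDir volume on the default medium comes out root-owned with 0644 permissions. A minimal sketch, with an illustrative busybox image rather than the test's exact mounttest spec:

    $ kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # write a file into the volume and report its mode and owner
        command: ["sh", "-c", "echo hi > /test-volume/f && stat -c '%a %U' /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF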
May 11 18:31:02.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:31:03.097: INFO: namespace emptydir-6288 deletion completed in 9.718039974s • [SLOW TEST:22.554 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:31:03.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:31:03.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1" in namespace "projected-6717" to be "success or failure" May 11 18:31:04.115: INFO: Pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1": Phase="Pending", Reason="", readiness=false. Elapsed: 244.181765ms May 11 18:31:06.270: INFO: Pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399235249s May 11 18:31:08.378: INFO: Pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507215502s May 11 18:31:10.382: INFO: Pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.511325803s STEP: Saw pod success May 11 18:31:10.382: INFO: Pod "downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1" satisfied condition "success or failure" May 11 18:31:10.384: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1 container client-container: STEP: delete the pod May 11 18:31:10.579: INFO: Waiting for pod downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1 to disappear May 11 18:31:10.585: INFO: Pod downwardapi-volume-a475beaf-772a-45a4-9e6c-024f5f4649e1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:31:10.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6717" for this suite. 
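The projected downwardAPI volume behind this test maps a container's CPU limit to a file via resourceFieldRef. A minimal sketch assuming a 500m limit; pod and path names are illustrative:

    $ kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF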
May 11 18:31:16.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:31:16.769: INFO: namespace projected-6717 deletion completed in 6.182284027s • [SLOW TEST:13.672 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:31:16.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-88bb0a50-e2b1-402f-9c9d-21837d52cd29 STEP: Creating a pod to test consume secrets May 11 18:31:17.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621" in namespace "projected-1669" to be "success or failure" May 11 18:31:17.351: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Pending", Reason="", readiness=false. Elapsed: 9.765274ms May 11 18:31:19.540: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19839392s May 11 18:31:21.851: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Pending", Reason="", readiness=false. Elapsed: 4.509270142s May 11 18:31:23.854: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512634379s May 11 18:31:26.174: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Pending", Reason="", readiness=false. Elapsed: 8.832161762s May 11 18:31:28.177: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.835377748s STEP: Saw pod success May 11 18:31:28.177: INFO: Pod "pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621" satisfied condition "success or failure" May 11 18:31:28.179: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621 container secret-volume-test: STEP: delete the pod May 11 18:31:28.351: INFO: Waiting for pod pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621 to disappear May 11 18:31:28.381: INFO: Pod pod-projected-secrets-8d57c8bd-19ad-4a00-a973-b9baf7c4a621 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:31:28.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1669" for this suite. 
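The "multiple volumes" case mounts a single secret at two separate mount points and verifies both copies. A compact sketch; the secret name, key, and mount paths are illustrative:

    $ kubectl create secret generic demo-secret --from-literal=data-1=value-1
    $ kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        # read the same key through both mounts
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
      volumes:
      - name: secret-volume-1
        projected:
          sources:
          - secret:
              name: demo-secret
      - name: secret-volume-2
        projected:
          sources:
          - secret:
              name: demo-secret
    EOF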
May 11 18:31:36.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:31:36.499: INFO: namespace projected-1669 deletion completed in 8.114995288s • [SLOW TEST:19.729 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:31:36.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:31:38.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba" in namespace "projected-2176" to be "success or failure" May 11 18:31:38.845: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba": Phase="Pending", Reason="", readiness=false. Elapsed: 263.049847ms May 11 18:31:40.849: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267429934s May 11 18:31:42.956: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374187316s May 11 18:31:45.247: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.665175239s May 11 18:31:47.251: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.669455941s STEP: Saw pod success May 11 18:31:47.252: INFO: Pod "downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba" satisfied condition "success or failure" May 11 18:31:47.255: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba container client-container: STEP: delete the pod May 11 18:31:47.761: INFO: Waiting for pod downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba to disappear May 11 18:31:49.330: INFO: Pod downwardapi-volume-43cbe2ec-2509-440c-87ae-2b055cb528ba no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:31:49.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2176" for this suite. 
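This variant is the same resourceFieldRef mapping as the cpu-limit sketch above, but with the resources.limits block omitted from the container spec; the downward API then falls back to the node's allocatable CPU, so the mounted file reflects node capacity rather than a container limit. Checking it, using the hypothetical pod name from that sketch with the limits block removed:

    $ kubectl exec downwardapi-volume-demo -- cat /etc/podinfo/cpu_limit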
May 11 18:31:58.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:31:58.859: INFO: namespace projected-2176 deletion completed in 8.543349457s • [SLOW TEST:22.360 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:31:58.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 11 18:31:59.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1893' May 11 18:32:01.066: INFO: stderr: "" May 11 18:32:01.066: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 11 18:32:02.240: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:02.241: INFO: Found 0 / 1 May 11 18:32:03.090: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:03.090: INFO: Found 0 / 1 May 11 18:32:04.070: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:04.071: INFO: Found 0 / 1 May 11 18:32:05.121: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:05.121: INFO: Found 0 / 1 May 11 18:32:06.071: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:06.071: INFO: Found 0 / 1 May 11 18:32:07.415: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:07.415: INFO: Found 0 / 1 May 11 18:32:08.070: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:08.070: INFO: Found 0 / 1 May 11 18:32:09.071: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:09.071: INFO: Found 0 / 1 May 11 18:32:10.199: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:10.199: INFO: Found 0 / 1 May 11 18:32:11.070: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:11.070: INFO: Found 1 / 1 May 11 18:32:11.070: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 18:32:11.074: INFO: Selector matched 1 pods for map[app:redis] May 11 18:32:11.074: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings May 11 18:32:11.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893' May 11 18:32:11.190: INFO: stderr: "" May 11 18:32:11.190: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 18:32:09.655 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 18:32:09.655 # Server started, Redis version 3.2.12\n1:M 11 May 18:32:09.655 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 18:32:09.655 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 11 18:32:11.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893 --tail=1' May 11 18:32:11.279: INFO: stderr: "" May 11 18:32:11.279: INFO: stdout: "1:M 11 May 18:32:09.655 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 11 18:32:11.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893 --limit-bytes=1' May 11 18:32:11.385: INFO: stderr: "" May 11 18:32:11.385: INFO: stdout: " " STEP: exposing timestamps May 11 18:32:11.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893 --tail=1 --timestamps' May 11 18:32:11.504: INFO: stderr: "" May 11 18:32:11.504: INFO: stdout: "2020-05-11T18:32:09.655565079Z 1:M 11 May 18:32:09.655 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 11 18:32:14.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893 --since=1s' May 11 18:32:14.112: INFO: stderr: "" May 11 18:32:14.112: INFO: stdout: "" May 11 18:32:14.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jtjt redis-master --namespace=kubectl-1893 --since=24h' May 11 18:32:14.209: INFO: stderr: "" May 11 18:32:14.209: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 18:32:09.655 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 18:32:09.655 # Server started, Redis version 3.2.12\n1:M 11 May 18:32:09.655 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 18:32:09.655 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 11 18:32:14.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1893' May 11 18:32:14.344: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:32:14.344: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 11 18:32:14.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1893' May 11 18:32:14.440: INFO: stderr: "No resources found.\n" May 11 18:32:14.440: INFO: stdout: "" May 11 18:32:14.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1893 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:32:14.913: INFO: stderr: "" May 11 18:32:14.913: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:32:14.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1893" for this suite. 
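The filtering flags the kubectl-logs test cycles through, in plain form (the pod and namespace from this run are gone after teardown):

    $ kubectl logs redis-master-5jtjt redis-master -n kubectl-1893 --tail=1          # last line only
    $ kubectl logs redis-master-5jtjt redis-master -n kubectl-1893 --limit-bytes=1   # first byte only
    $ kubectl logs redis-master-5jtjt redis-master -n kubectl-1893 --tail=1 --timestamps
    $ kubectl logs redis-master-5jtjt redis-master -n kubectl-1893 --since=1s        # empty if nothing was logged in the last second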
May 11 18:32:39.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:32:39.583: INFO: namespace kubectl-1893 deletion completed in 24.478752898s • [SLOW TEST:40.723 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:32:39.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:32:40.034: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 11 18:32:44.428: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:32:44.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-237" for this suite. 
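The quota scenario above can be replayed by hand: cap the namespace at two pods, ask a ReplicationController for three replicas, and the controller surfaces a ReplicaFailure condition (reason FailedCreate) until the RC is scaled back within quota. A minimal sketch, assuming namespace $NS; the names mirror the test but are otherwise arbitrary:

kubectl create quota condition-test --hard=pods=2 -n "$NS"
kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# The quota violation appears as a condition on the RC status:
kubectl get rc condition-test -n "$NS" -o jsonpath='{.status.conditions}'
# Scaling down to the quota clears the condition, as in the run above:
kubectl scale rc condition-test -n "$NS" --replicas=2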
May 11 18:32:59.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:33:00.113: INFO: namespace replication-controller-237 deletion completed in 15.406467797s • [SLOW TEST:20.531 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:33:00.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 18:33:00.799: INFO: Waiting up to 5m0s for pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe" in namespace "downward-api-8302" to be "success or failure" May 11 18:33:00.956: INFO: Pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 156.279844ms May 11 18:33:02.960: INFO: Pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160415132s May 11 18:33:05.032: INFO: Pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232001053s May 11 18:33:07.034: INFO: Pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.234466287s STEP: Saw pod success May 11 18:33:07.034: INFO: Pod "downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe" satisfied condition "success or failure" May 11 18:33:07.035: INFO: Trying to get logs from node iruya-worker2 pod downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe container dapi-container: STEP: delete the pod May 11 18:33:07.116: INFO: Waiting for pod downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe to disappear May 11 18:33:07.139: INFO: Pod downward-api-5ffdbdd1-adaa-4109-8c17-8d17afbd2bbe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:33:07.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8302" for this suite. 
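The pod in this test reads its own UID through a downward-API environment variable. A minimal sketch of such a pod, assuming namespace $NS (pod name and image are illustrative):

kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the field this conformance test asserts on
EOF
kubectl logs downward-uid-demo -n "$NS"   # prints POD_UID=<the pod's UID>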
May 11 18:33:13.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:33:13.211: INFO: namespace downward-api-8302 deletion completed in 6.069259072s • [SLOW TEST:13.098 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:33:13.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 18:33:13.290: INFO: Waiting up to 5m0s for pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6" in namespace "downward-api-5229" to be "success or failure" May 11 18:33:13.305: INFO: Pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.161929ms May 11 18:33:15.595: INFO: Pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305087931s May 11 18:33:17.599: INFO: Pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6": Phase="Running", Reason="", readiness=true. Elapsed: 4.309187137s May 11 18:33:19.602: INFO: Pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.312321397s STEP: Saw pod success May 11 18:33:19.602: INFO: Pod "downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6" satisfied condition "success or failure" May 11 18:33:19.605: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6 container dapi-container: STEP: delete the pod May 11 18:33:20.282: INFO: Waiting for pod downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6 to disappear May 11 18:33:20.287: INFO: Pod downward-api-d16dbf4f-046b-42ef-9929-1e12bcab67a6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:33:20.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5229" for this suite. 
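This variant relies on resourceFieldRef defaulting: when a container declares no limits of its own, limits.cpu and limits.memory resolve to the node's allocatable capacity. A minimal sketch, assuming namespace $NS:

kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    # No resources declared, so the two values below fall back to node allocatable.
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-limits-demo -n "$NS"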
May 11 18:33:30.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:33:30.827: INFO: namespace downward-api-5229 deletion completed in 10.537439857s • [SLOW TEST:17.615 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:33:30.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:33:44.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2651" for this suite. May 11 18:33:52.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:33:52.653: INFO: namespace namespaces-2651 deletion completed in 8.103122606s STEP: Destroying namespace "nsdeletetest-356" for this suite. May 11 18:33:52.655: INFO: Namespace nsdeletetest-356 was already deleted STEP: Destroying namespace "nsdeletetest-2069" for this suite. 
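The invariant checked here is that deleting a namespace garbage-collects everything in it, services included, and that a recreated namespace of the same name starts empty. A rough CLI replay; the namespace and service names are hypothetical:

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo
# 'kubectl wait --for=delete' needs a reasonably recent kubectl; otherwise poll 'kubectl get ns'.
kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo   # expect: No resources found.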
May 11 18:33:58.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:33:59.084: INFO: namespace nsdeletetest-2069 deletion completed in 6.429410277s • [SLOW TEST:28.257 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:33:59.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 18:34:01.463: INFO: Create a RollingUpdate DaemonSet May 11 18:34:01.467: INFO: Check that daemon pods launch on every node of the cluster May 11 18:34:01.908: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:01.911: INFO: Number of nodes with available pods: 0 May 11 18:34:01.911: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:03.655: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:03.657: INFO: Number of nodes with available pods: 0 May 11 18:34:03.657: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:04.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:04.053: INFO: Number of nodes with available pods: 0 May 11 18:34:04.053: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:05.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:05.595: INFO: Number of nodes with available pods: 0 May 11 18:34:05.596: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:05.916: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:05.922: INFO: Number of nodes with available pods: 0 May 11 18:34:05.922: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:07.010: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:07.080: INFO: Number of nodes with available 
pods: 0 May 11 18:34:07.080: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:07.997: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:08.171: INFO: Number of nodes with available pods: 0 May 11 18:34:08.171: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:08.915: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:09.375: INFO: Number of nodes with available pods: 1 May 11 18:34:09.375: INFO: Node iruya-worker is running more than one daemon pod May 11 18:34:09.915: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:09.918: INFO: Number of nodes with available pods: 2 May 11 18:34:09.918: INFO: Number of running nodes: 2, number of available pods: 2 May 11 18:34:09.918: INFO: Update the DaemonSet to trigger a rollout May 11 18:34:09.924: INFO: Updating DaemonSet daemon-set May 11 18:34:23.394: INFO: Roll back the DaemonSet before rollout is complete May 11 18:34:23.400: INFO: Updating DaemonSet daemon-set May 11 18:34:23.400: INFO: Make sure DaemonSet rollback is complete May 11 18:34:23.620: INFO: Wrong image for pod: daemon-set-v7hml. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 18:34:23.620: INFO: Pod daemon-set-v7hml is not available May 11 18:34:23.624: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:24.627: INFO: Wrong image for pod: daemon-set-v7hml. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 18:34:24.627: INFO: Pod daemon-set-v7hml is not available May 11 18:34:24.630: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:25.627: INFO: Wrong image for pod: daemon-set-v7hml. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 18:34:25.627: INFO: Pod daemon-set-v7hml is not available May 11 18:34:25.629: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:26.627: INFO: Wrong image for pod: daemon-set-v7hml. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 11 18:34:26.627: INFO: Pod daemon-set-v7hml is not available May 11 18:34:26.630: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:28.208: INFO: Pod daemon-set-txzb8 is not available May 11 18:34:28.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:34:28.628: INFO: Pod daemon-set-txzb8 is not available May 11 18:34:28.632: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9157, will wait for the garbage collector to delete the pods May 11 18:34:28.767: INFO: Deleting DaemonSet.extensions daemon-set took: 75.001825ms May 11 18:34:29.067: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.311186ms May 11 18:34:42.416: INFO: Number of nodes with available pods: 0 May 11 18:34:42.416: INFO: Number of running nodes: 0, number of available pods: 0 May 11 18:34:42.418: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9157/daemonsets","resourceVersion":"10301234"},"items":null} May 11 18:34:42.722: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9157/pods","resourceVersion":"10301235"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:34:42.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9157" for this suite. 
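The rollout-and-undo sequence above maps onto a few kubectl commands. A minimal sketch against a DaemonSet named daemon-set; the container name "app" is a placeholder, and the images mirror this run:

# Trigger a rolling update to an image that can never be pulled:
kubectl set image daemonset/daemon-set app=foo:non-existent -n "$NS"
# Undo before the rollout completes; pods still healthy on the old template
# should be left alone (the "no unnecessary restarts" being verified):
kubectl rollout undo daemonset/daemon-set -n "$NS"
kubectl rollout status daemonset/daemon-set -n "$NS"
kubectl get pods -n "$NS" -o wide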
May 11 18:34:57.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:34:58.072: INFO: namespace daemonsets-9157 deletion completed in 15.117301415s • [SLOW TEST:58.988 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:34:58.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:34:59.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2849" for this suite. 
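The QOS class asserted by this test is computed by the API server from the pod's resource spec: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A minimal Guaranteed-class sketch, assuming namespace $NS:

kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:            # identical to requests, hence Guaranteed
        cpu: 100m
        memory: 64Mi
EOF
kubectl get pod qos-demo -n "$NS" -o jsonpath='{.status.qosClass}'   # Guaranteed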
May 11 18:35:24.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:24.426: INFO: namespace pods-2849 deletion completed in 24.678919111s • [SLOW TEST:26.354 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:35:24.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 18:35:24.489: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:35:39.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-989" for this suite. 
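Init containers run to completion one at a time before the app container starts; on a restartPolicy: Never pod a failing init container fails the whole pod. A minimal happy-path sketch, assuming namespace $NS:

kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo app started"]
EOF
# Both init containers should report terminated/Completed before "main" runs:
kubectl get pod init-demo -n "$NS" -o jsonpath='{.status.initContainerStatuses[*].state}'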
May 11 18:35:46.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:46.519: INFO: namespace init-container-989 deletion completed in 6.663176038s • [SLOW TEST:22.093 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:35:46.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6184 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 18:35:49.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 18:36:25.858: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.236:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6184 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:36:25.858: INFO: >>> kubeConfig: /root/.kube/config I0511 18:36:25.886594 7 log.go:172] (0xc0012528f0) (0xc0026f9a40) Create stream I0511 18:36:25.886620 7 log.go:172] (0xc0012528f0) (0xc0026f9a40) Stream added, broadcasting: 1 I0511 18:36:25.888504 7 log.go:172] (0xc0012528f0) Reply frame received for 1 I0511 18:36:25.888546 7 log.go:172] (0xc0012528f0) (0xc001236140) Create stream I0511 18:36:25.888561 7 log.go:172] (0xc0012528f0) (0xc001236140) Stream added, broadcasting: 3 I0511 18:36:25.890068 7 log.go:172] (0xc0012528f0) Reply frame received for 3 I0511 18:36:25.890114 7 log.go:172] (0xc0012528f0) (0xc0026f9ae0) Create stream I0511 18:36:25.890130 7 log.go:172] (0xc0012528f0) (0xc0026f9ae0) Stream added, broadcasting: 5 I0511 18:36:25.891152 7 log.go:172] (0xc0012528f0) Reply frame received for 5 I0511 18:36:25.971199 7 log.go:172] (0xc0012528f0) Data frame received for 3 I0511 18:36:25.971251 7 log.go:172] (0xc001236140) (3) Data frame handling I0511 18:36:25.971270 7 log.go:172] (0xc001236140) (3) Data frame sent I0511 18:36:25.971312 7 log.go:172] (0xc0012528f0) Data frame received for 5 I0511 18:36:25.971328 7 log.go:172] (0xc0026f9ae0) (5) Data frame handling I0511 18:36:25.971372 7 log.go:172] (0xc0012528f0) Data frame received for 3 I0511 18:36:25.971429 7 log.go:172] (0xc001236140) (3) Data frame handling I0511 18:36:25.973383 7 log.go:172] (0xc0012528f0) Data frame received for 1 I0511 18:36:25.973431 7 log.go:172] (0xc0026f9a40) (1) Data frame handling I0511 18:36:25.973474 7 log.go:172] 
(0xc0026f9a40) (1) Data frame sent I0511 18:36:25.973503 7 log.go:172] (0xc0012528f0) (0xc0026f9a40) Stream removed, broadcasting: 1 I0511 18:36:25.973545 7 log.go:172] (0xc0012528f0) Go away received I0511 18:36:25.973763 7 log.go:172] (0xc0012528f0) (0xc0026f9a40) Stream removed, broadcasting: 1 I0511 18:36:25.973795 7 log.go:172] (0xc0012528f0) (0xc001236140) Stream removed, broadcasting: 3 I0511 18:36:25.973813 7 log.go:172] (0xc0012528f0) (0xc0026f9ae0) Stream removed, broadcasting: 5 May 11 18:36:25.973: INFO: Found all expected endpoints: [netserver-0] May 11 18:36:25.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.185:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6184 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:36:25.977: INFO: >>> kubeConfig: /root/.kube/config I0511 18:36:26.012878 7 log.go:172] (0xc001c2c9a0) (0xc001236b40) Create stream I0511 18:36:26.012904 7 log.go:172] (0xc001c2c9a0) (0xc001236b40) Stream added, broadcasting: 1 I0511 18:36:26.014926 7 log.go:172] (0xc001c2c9a0) Reply frame received for 1 I0511 18:36:26.014967 7 log.go:172] (0xc001c2c9a0) (0xc001845ea0) Create stream I0511 18:36:26.014981 7 log.go:172] (0xc001c2c9a0) (0xc001845ea0) Stream added, broadcasting: 3 I0511 18:36:26.016115 7 log.go:172] (0xc001c2c9a0) Reply frame received for 3 I0511 18:36:26.016176 7 log.go:172] (0xc001c2c9a0) (0xc001236c80) Create stream I0511 18:36:26.016193 7 log.go:172] (0xc001c2c9a0) (0xc001236c80) Stream added, broadcasting: 5 I0511 18:36:26.017326 7 log.go:172] (0xc001c2c9a0) Reply frame received for 5 I0511 18:36:26.081755 7 log.go:172] (0xc001c2c9a0) Data frame received for 3 I0511 18:36:26.081810 7 log.go:172] (0xc001845ea0) (3) Data frame handling I0511 18:36:26.081857 7 log.go:172] (0xc001845ea0) (3) Data frame sent I0511 18:36:26.081878 7 log.go:172] (0xc001c2c9a0) Data frame received for 3 I0511 18:36:26.081896 7 log.go:172] (0xc001845ea0) (3) Data frame handling I0511 18:36:26.081988 7 log.go:172] (0xc001c2c9a0) Data frame received for 5 I0511 18:36:26.082011 7 log.go:172] (0xc001236c80) (5) Data frame handling I0511 18:36:26.082999 7 log.go:172] (0xc001c2c9a0) Data frame received for 1 I0511 18:36:26.083072 7 log.go:172] (0xc001236b40) (1) Data frame handling I0511 18:36:26.083113 7 log.go:172] (0xc001236b40) (1) Data frame sent I0511 18:36:26.083145 7 log.go:172] (0xc001c2c9a0) (0xc001236b40) Stream removed, broadcasting: 1 I0511 18:36:26.083243 7 log.go:172] (0xc001c2c9a0) Go away received I0511 18:36:26.083283 7 log.go:172] (0xc001c2c9a0) (0xc001236b40) Stream removed, broadcasting: 1 I0511 18:36:26.083339 7 log.go:172] (0xc001c2c9a0) (0xc001845ea0) Stream removed, broadcasting: 3 I0511 18:36:26.083369 7 log.go:172] (0xc001c2c9a0) (0xc001236c80) Stream removed, broadcasting: 5 May 11 18:36:26.083: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:36:26.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6184" for this suite. 
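Behind the exec-stream plumbing above is a plain HTTP probe: the framework execs into a helper pod and curls each netserver pod's /hostName endpoint on port 8080. A minimal sketch of the same probe; the pod and container names follow this run but are otherwise placeholders:

# IP of one of the server pods under test:
POD_IP=$(kubectl get pod netserver-0 -n "$NS" -o jsonpath='{.status.podIP}')
# The same command the framework runs via ExecWithOptions:
kubectl exec host-test-container-pod -c hostexec -n "$NS" -- /bin/sh -c \
  "curl -g -q -s --max-time 15 --connect-timeout 1 http://$POD_IP:8080/hostName"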
May 11 18:36:55.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:36:55.284: INFO: namespace pod-network-test-6184 deletion completed in 29.197032071s • [SLOW TEST:68.764 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:36:55.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 18:36:56.400: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1599,SelfLink:/api/v1/namespaces/watch-1599/configmaps/e2e-watch-test-resource-version,UID:71a504a1-7eae-45cc-bdd8-a319db0010f3,ResourceVersion:10301635,Generation:0,CreationTimestamp:2020-05-11 18:36:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 18:36:56.400: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1599,SelfLink:/api/v1/namespaces/watch-1599/configmaps/e2e-watch-test-resource-version,UID:71a504a1-7eae-45cc-bdd8-a319db0010f3,ResourceVersion:10301636,Generation:0,CreationTimestamp:2020-05-11 18:36:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:36:56.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1599" for this suite. 
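A watch started at a specific resourceVersion replays only the changes made after that version, which is exactly what the test observes (the second modification and the deletion, not the first). A rough equivalent through the raw API, assuming namespace $NS and a configmap named demo-cm:

# Capture the resource version at some point in the object's history:
RV=$(kubectl get configmap demo-cm -n "$NS" -o jsonpath='{.metadata.resourceVersion}')
# Watch from that version; earlier events are not redelivered:
kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true&resourceVersion=$RV"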
May 11 18:37:02.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:37:03.005: INFO: namespace watch-1599 deletion completed in 6.178096962s • [SLOW TEST:7.721 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:37:03.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 18:37:17.554: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:17.652: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:19.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:19.657: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:21.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:21.656: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:23.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:23.767: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:25.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:25.927: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:27.653: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:27.655: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:29.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:29.657: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:31.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:31.656: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:33.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:33.656: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:35.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:35.655: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:37.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:37.656: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:39.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:39.656: INFO: 
Pod pod-with-poststart-exec-hook still exists May 11 18:37:41.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:41.656: INFO: Pod pod-with-poststart-exec-hook still exists May 11 18:37:43.652: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 18:37:43.802: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:37:43.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4676" for this suite. May 11 18:38:08.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:08.299: INFO: namespace container-lifecycle-hook-4676 deletion completed in 24.13189498s • [SLOW TEST:65.294 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:38:08.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:38:18.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8420" for this suite. 
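The guarantee exercised here is that independent watchers on the same collection observe events in the same order. A rough shell check under that assumption (requires jq; run while something is mutating configmaps in namespace $NS):

kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true" > watch-a.json &
kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true" > watch-b.json &
sleep 30; kill %1 %2
# Both streams should list identical resourceVersions in identical order:
diff <(jq -r '.object.metadata.resourceVersion' watch-a.json) \
     <(jq -r '.object.metadata.resourceVersion' watch-b.json)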
May 11 18:38:27.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:27.280: INFO: namespace watch-8420 deletion completed in 8.309306521s • [SLOW TEST:18.981 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:38:27.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 18:38:27.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7" in namespace "downward-api-1130" to be "success or failure" May 11 18:38:27.797: INFO: Pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.205003ms May 11 18:38:29.800: INFO: Pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061275721s May 11 18:38:31.804: INFO: Pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065567263s May 11 18:38:33.808: INFO: Pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069636146s STEP: Saw pod success May 11 18:38:33.808: INFO: Pod "downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7" satisfied condition "success or failure" May 11 18:38:33.811: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7 container client-container: STEP: delete the pod May 11 18:38:34.095: INFO: Waiting for pod downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7 to disappear May 11 18:38:34.144: INFO: Pod downwardapi-volume-5d069430-5a81-4502-8da7-4b299b9e36d7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:38:34.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1130" for this suite. 
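Here the CPU limit is surfaced through a downwardAPI volume rather than an environment variable: the kubelet writes the resolved value of limits.cpu into a file the container reads. A minimal sketch, assuming namespace $NS:

kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container   # required for volume-based resourceFieldRef
          resource: limits.cpu
EOF
kubectl logs downward-volume-demo -n "$NS"   # prints the resolved CPU limit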
May 11 18:38:40.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:40.332: INFO: namespace downward-api-1130 deletion completed in 6.183609371s • [SLOW TEST:13.050 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:38:40.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 18:38:40.748: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:38:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2818" for this suite. 
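The watch this test sets up can be approximated from the CLI: stream pod changes in the background while a pod is created and then gracefully deleted. A minimal sketch, assuming namespace $NS; the pod name is hypothetical:

kubectl get pods -n "$NS" --watch &   # streams the creation, termination notice, and removal
kubectl run pod-submit-demo --image=nginx:1.14-alpine --restart=Never -n "$NS"
kubectl delete pod pod-submit-demo -n "$NS" --grace-period=30
kill %1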
May 11 18:38:58.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:58.534: INFO: namespace pods-2818 deletion completed in 6.102738525s • [SLOW TEST:18.202 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:38:58.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1220c5b4-f036-4e48-ae95-bf637cd0ef35 STEP: Creating a pod to test consume configMaps May 11 18:38:58.658: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659" in namespace "projected-5923" to be "success or failure" May 11 18:38:58.674: INFO: Pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659": Phase="Pending", Reason="", readiness=false. Elapsed: 16.353913ms May 11 18:39:00.678: INFO: Pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019611227s May 11 18:39:02.682: INFO: Pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023604252s May 11 18:39:04.816: INFO: Pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157595841s STEP: Saw pod success May 11 18:39:04.816: INFO: Pod "pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659" satisfied condition "success or failure" May 11 18:39:04.819: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659 container projected-configmap-volume-test: STEP: delete the pod May 11 18:39:05.513: INFO: Waiting for pod pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659 to disappear May 11 18:39:05.805: INFO: Pod pod-projected-configmaps-6d6fd2bf-491c-4b7d-ae79-c7e71171f659 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:39:05.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5923" for this suite. 
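defaultMode sets the permission bits on every file materialized from a projected volume; this test mounts a configmap with mode 0400 and verifies both content and mode. A minimal sketch, assuming namespace $NS (configmap name, key, and value are illustrative):

kubectl create configmap projected-demo-cm --from-literal=data-1=value-1 -n "$NS"
kubectl apply -n "$NS" -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      defaultMode: 0400   # octal; every projected file gets these permission bits
      sources:
      - configMap:
          name: projected-demo-cm
EOF
kubectl logs projected-mode-demo -n "$NS"   # prints value-1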
May 11 18:39:11.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:39:11.932: INFO: namespace projected-5923 deletion completed in 6.12286137s • [SLOW TEST:13.398 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:39:11.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-796.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.179.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.179.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.179.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.179.246_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-796.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-796.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-796.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-796.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.179.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.179.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.179.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.179.246_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 18:39:22.242: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.246: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.249: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.266: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.268: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.270: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.273: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:22.287: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:27.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.296: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 
18:39:27.297: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.299: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.314: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.316: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.318: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.320: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:27.335: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:32.402: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.592: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods 
dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.598: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:32.619: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:37.291: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.295: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.298: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.301: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.360: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.362: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.364: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.367: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the 
requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:37.379: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:42.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.292: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.294: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.296: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.311: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.314: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:42.331: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:47.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-796.svc.cluster.local from pod 
dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.293: INFO: Unable to read wheezy_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.295: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.297: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.313: INFO: Unable to read jessie_udp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.316: INFO: Unable to read jessie_tcp@dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.319: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.322: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local from pod dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4: the server could not find the requested resource (get pods dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4) May 11 18:39:47.334: INFO: Lookups using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 failed for: [wheezy_udp@dns-test-service.dns-796.svc.cluster.local wheezy_tcp@dns-test-service.dns-796.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_udp@dns-test-service.dns-796.svc.cluster.local jessie_tcp@dns-test-service.dns-796.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-796.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-796.svc.cluster.local] May 11 18:39:52.322: INFO: DNS probes using dns-796/dns-test-a3e851d0-67b9-43f8-9c71-bf1fb58222b4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:39:53.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-796" for this suite. 
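The six rounds of "Unable to read ..." above are the prober re-reading result files roughly every five seconds until every expected record resolves (UDP and TCP, the plain service name, and the _http._tcp SRV name, for both the wheezy and jessie containers); at 18:39:52 all lookups succeed. Each result file is written by a shell loop of the shape below, reconstructed from the command tail visible at the top of this block; the dig flags and the iteration bound are assumptions, not taken verbatim from this run:

  for i in $(seq 1 600); do
    # UDP lookup; the TCP variant adds +tcp and writes to the matching *_tcp@... file
    check="$(dig +notcp +noall +answer +search dns-test-service.dns-796.svc.cluster.local A)" \
      && test -n "$check" \
      && echo OK > /results/wheezy_udp@dns-test-service.dns-796.svc.cluster.local
    sleep 1
  done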
May 11 18:39:59.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:39:59.840: INFO: namespace dns-796 deletion completed in 6.167460077s
• [SLOW TEST:47.907 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:39:59.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 11 18:40:04.814: INFO: Successfully updated pod "annotationupdate64508da2-02dc-4cc5-a5b7-ce70b0e6318f"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:40:06.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4706" for this suite.
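The pod in this test mounts its own annotations through a projected downwardAPI volume, so when the test updates an annotation the kubelet rewrites the mounted file. A minimal sketch of that wiring (the pod name, annotation, image, and mount path are illustrative, not the generated values from this run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations:
      build: "one"
  spec:
    containers:
    - name: client
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations

After something like kubectl annotate pod annotationupdate-demo --overwrite build=two, the contents of /etc/podinfo/annotations are refreshed on the kubelet's next sync, which is what the "Successfully updated pod" step waits for.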
May 11 18:40:31.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:40:31.168: INFO: namespace projected-4706 deletion completed in 24.305669046s
• [SLOW TEST:31.327 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:40:31.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1740, will wait for the garbage collector to delete the pods
May 11 18:40:39.346: INFO: Deleting Job.batch foo took: 61.291363ms
May 11 18:40:39.847: INFO: Terminating Job.batch foo pods took: 500.217732ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:41:22.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1740" for this suite.
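The delete above goes through the garbage collector: removing the Job object itself is fast (61ms here), while its pods are torn down asynchronously, which is why "Ensuring job was deleted" takes several more seconds. A sketch of the equivalent manual sequence, using the standard job-name label that the Job controller stamps on its pods:

  kubectl delete job foo -n job-1740
  # the Job object disappears almost immediately; the pods drain
  # as the garbage collector collects them
  kubectl get pods -n job-1740 -l job-name=foo --watch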
May 11 18:41:30.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:41:30.624: INFO: namespace job-1740 deletion completed in 8.324244843s
• [SLOW TEST:59.455 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 18:41:30.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-e27c4ec4-7543-482e-82bd-7150b2849981
STEP: Creating secret with name s-test-opt-upd-ff8dff6a-5f0e-4c93-b67d-df9ef7eb517d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e27c4ec4-7543-482e-82bd-7150b2849981
STEP: Updating secret s-test-opt-upd-ff8dff6a-5f0e-4c93-b67d-df9ef7eb517d
STEP: Creating secret with name s-test-opt-create-bf8a92a7-0db4-4d8d-a836-fdc0de3e0a39
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 18:43:25.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1001" for this suite.
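The pod here mounts its secrets with optional: true, which is what allows one secret to be deleted and another created while the pod keeps running; the kubelet updates the mounted files as the secrets change, and the final step simply polls the mount until all three changes are visible. A minimal sketch of one such volume (pod and secret names shortened for illustration):

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-watcher
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do ls /etc/secret-volume 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: maybe-secret
        mountPath: /etc/secret-volume
    volumes:
    - name: maybe-secret
      secret:
        secretName: s-test-opt-create
        optional: true   # the pod starts even if this secret does not exist yet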
May 11 18:43:52.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:43:52.383: INFO: namespace secrets-1001 deletion completed in 26.925417729s • [SLOW TEST:141.759 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:43:52.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4274 I0511 18:43:53.532916 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4274, replica count: 1 I0511 18:43:54.583302 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:43:55.583473 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:43:56.583714 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:43:57.583976 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:43:58.584217 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:43:59.584425 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:44:00.584638 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:44:01.584831 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:44:02.585061 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:44:03.585310 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 18:44:04.312: INFO: Created: latency-svc-n5qpx May 11 18:44:04.348: INFO: Got endpoints: latency-svc-n5qpx [662.526149ms] May 11 18:44:05.300: INFO: Created: latency-svc-78npq May 11 18:44:05.348: INFO: Got endpoints: latency-svc-78npq [1.000367274s] May 11 18:44:05.600: INFO: Created: 
latency-svc-nw96l May 11 18:44:05.952: INFO: Got endpoints: latency-svc-nw96l [1.604247453s] May 11 18:44:06.719: INFO: Created: latency-svc-gpk7s May 11 18:44:06.738: INFO: Got endpoints: latency-svc-gpk7s [2.390266015s] May 11 18:44:06.793: INFO: Created: latency-svc-qzqzx May 11 18:44:06.811: INFO: Got endpoints: latency-svc-qzqzx [2.463061404s] May 11 18:44:06.948: INFO: Created: latency-svc-msr8v May 11 18:44:06.951: INFO: Got endpoints: latency-svc-msr8v [2.603408366s] May 11 18:44:07.162: INFO: Created: latency-svc-5xkxz May 11 18:44:07.194: INFO: Got endpoints: latency-svc-5xkxz [2.846592887s] May 11 18:44:07.552: INFO: Created: latency-svc-8ftx6 May 11 18:44:07.750: INFO: Got endpoints: latency-svc-8ftx6 [3.401935354s] May 11 18:44:08.002: INFO: Created: latency-svc-6dstr May 11 18:44:08.012: INFO: Got endpoints: latency-svc-6dstr [3.664355578s] May 11 18:44:08.144: INFO: Created: latency-svc-6zwzj May 11 18:44:08.535: INFO: Got endpoints: latency-svc-6zwzj [4.187056375s] May 11 18:44:08.538: INFO: Created: latency-svc-bmhlg May 11 18:44:08.562: INFO: Got endpoints: latency-svc-bmhlg [4.213825515s] May 11 18:44:08.755: INFO: Created: latency-svc-x5pjx May 11 18:44:08.788: INFO: Got endpoints: latency-svc-x5pjx [4.440498604s] May 11 18:44:08.959: INFO: Created: latency-svc-x95sj May 11 18:44:08.999: INFO: Got endpoints: latency-svc-x95sj [4.651619834s] May 11 18:44:09.042: INFO: Created: latency-svc-qpfj9 May 11 18:44:09.210: INFO: Got endpoints: latency-svc-qpfj9 [4.861816059s] May 11 18:44:09.216: INFO: Created: latency-svc-5g5m7 May 11 18:44:09.271: INFO: Got endpoints: latency-svc-5g5m7 [4.922708281s] May 11 18:44:09.366: INFO: Created: latency-svc-tjz9j May 11 18:44:09.407: INFO: Got endpoints: latency-svc-tjz9j [5.059523236s] May 11 18:44:09.450: INFO: Created: latency-svc-wtmw5 May 11 18:44:09.462: INFO: Got endpoints: latency-svc-wtmw5 [4.114249192s] May 11 18:44:09.529: INFO: Created: latency-svc-jd669 May 11 18:44:09.569: INFO: Got endpoints: latency-svc-jd669 [3.617287764s] May 11 18:44:09.679: INFO: Created: latency-svc-zgv8t May 11 18:44:09.681: INFO: Got endpoints: latency-svc-zgv8t [2.94286752s] May 11 18:44:09.720: INFO: Created: latency-svc-9rjlz May 11 18:44:09.740: INFO: Got endpoints: latency-svc-9rjlz [2.929516676s] May 11 18:44:09.906: INFO: Created: latency-svc-js7zh May 11 18:44:09.909: INFO: Got endpoints: latency-svc-js7zh [2.958043383s] May 11 18:44:10.213: INFO: Created: latency-svc-pp795 May 11 18:44:10.370: INFO: Got endpoints: latency-svc-pp795 [3.175413843s] May 11 18:44:10.417: INFO: Created: latency-svc-tzwhb May 11 18:44:10.491: INFO: Got endpoints: latency-svc-tzwhb [2.741155062s] May 11 18:44:10.807: INFO: Created: latency-svc-cxdnb May 11 18:44:11.042: INFO: Got endpoints: latency-svc-cxdnb [3.029644172s] May 11 18:44:11.057: INFO: Created: latency-svc-tlfsm May 11 18:44:11.071: INFO: Got endpoints: latency-svc-tlfsm [2.536639439s] May 11 18:44:11.442: INFO: Created: latency-svc-dn25m May 11 18:44:11.648: INFO: Got endpoints: latency-svc-dn25m [3.086539199s] May 11 18:44:11.852: INFO: Created: latency-svc-mxp2q May 11 18:44:12.219: INFO: Created: latency-svc-sjkfl May 11 18:44:12.220: INFO: Got endpoints: latency-svc-mxp2q [3.431746543s] May 11 18:44:12.575: INFO: Got endpoints: latency-svc-sjkfl [926.929291ms] May 11 18:44:12.664: INFO: Created: latency-svc-x2n9c May 11 18:44:13.474: INFO: Got endpoints: latency-svc-x2n9c [4.474212332s] May 11 18:44:13.811: INFO: Created: latency-svc-7gcrl May 11 18:44:14.122: INFO: Got endpoints: 
latency-svc-7gcrl [4.912285181s] May 11 18:44:14.576: INFO: Created: latency-svc-mbwdl May 11 18:44:14.904: INFO: Got endpoints: latency-svc-mbwdl [5.633782658s] May 11 18:44:14.906: INFO: Created: latency-svc-w6bd6 May 11 18:44:15.275: INFO: Got endpoints: latency-svc-w6bd6 [5.86780306s] May 11 18:44:15.916: INFO: Created: latency-svc-4g7n4 May 11 18:44:16.146: INFO: Created: latency-svc-szlqd May 11 18:44:16.147: INFO: Got endpoints: latency-svc-4g7n4 [6.684241065s] May 11 18:44:16.490: INFO: Got endpoints: latency-svc-szlqd [6.92115609s] May 11 18:44:16.766: INFO: Created: latency-svc-w45mm May 11 18:44:16.868: INFO: Got endpoints: latency-svc-w45mm [7.186734329s] May 11 18:44:17.024: INFO: Created: latency-svc-9dfm7 May 11 18:44:17.082: INFO: Got endpoints: latency-svc-9dfm7 [7.341821308s] May 11 18:44:17.369: INFO: Created: latency-svc-ckdxw May 11 18:44:17.418: INFO: Got endpoints: latency-svc-ckdxw [7.508645332s] May 11 18:44:17.629: INFO: Created: latency-svc-w8ndw May 11 18:44:17.663: INFO: Got endpoints: latency-svc-w8ndw [7.293084693s] May 11 18:44:17.832: INFO: Created: latency-svc-j2z49 May 11 18:44:17.834: INFO: Got endpoints: latency-svc-j2z49 [7.343256949s] May 11 18:44:17.929: INFO: Created: latency-svc-6zvh4 May 11 18:44:18.043: INFO: Got endpoints: latency-svc-6zvh4 [7.000754506s] May 11 18:44:18.050: INFO: Created: latency-svc-8c5l9 May 11 18:44:18.078: INFO: Got endpoints: latency-svc-8c5l9 [7.006409938s] May 11 18:44:18.198: INFO: Created: latency-svc-nx8g6 May 11 18:44:18.247: INFO: Got endpoints: latency-svc-nx8g6 [6.02723823s] May 11 18:44:18.402: INFO: Created: latency-svc-h2tqg May 11 18:44:18.416: INFO: Got endpoints: latency-svc-h2tqg [5.840674896s] May 11 18:44:18.500: INFO: Created: latency-svc-w7vl6 May 11 18:44:18.664: INFO: Got endpoints: latency-svc-w7vl6 [5.190714817s] May 11 18:44:18.705: INFO: Created: latency-svc-2l5rl May 11 18:44:18.763: INFO: Got endpoints: latency-svc-2l5rl [4.641118547s] May 11 18:44:18.903: INFO: Created: latency-svc-v6gl7 May 11 18:44:18.949: INFO: Got endpoints: latency-svc-v6gl7 [4.044939082s] May 11 18:44:19.145: INFO: Created: latency-svc-9hzxb May 11 18:44:19.214: INFO: Got endpoints: latency-svc-9hzxb [3.93904934s] May 11 18:44:19.217: INFO: Created: latency-svc-7s4l2 May 11 18:44:19.347: INFO: Got endpoints: latency-svc-7s4l2 [3.200870617s] May 11 18:44:19.404: INFO: Created: latency-svc-qh8mp May 11 18:44:19.423: INFO: Got endpoints: latency-svc-qh8mp [2.932853278s] May 11 18:44:19.542: INFO: Created: latency-svc-w7trs May 11 18:44:19.573: INFO: Got endpoints: latency-svc-w7trs [2.705588246s] May 11 18:44:19.767: INFO: Created: latency-svc-qz2fc May 11 18:44:19.807: INFO: Got endpoints: latency-svc-qz2fc [2.725106786s] May 11 18:44:19.971: INFO: Created: latency-svc-nszm6 May 11 18:44:20.005: INFO: Got endpoints: latency-svc-nszm6 [2.587265675s] May 11 18:44:20.042: INFO: Created: latency-svc-vfcrn May 11 18:44:20.066: INFO: Got endpoints: latency-svc-vfcrn [2.402941561s] May 11 18:44:20.161: INFO: Created: latency-svc-prp46 May 11 18:44:20.188: INFO: Got endpoints: latency-svc-prp46 [2.353322095s] May 11 18:44:20.227: INFO: Created: latency-svc-8cnhk May 11 18:44:20.293: INFO: Got endpoints: latency-svc-8cnhk [2.250696532s] May 11 18:44:20.347: INFO: Created: latency-svc-v9n2n May 11 18:44:20.362: INFO: Got endpoints: latency-svc-v9n2n [2.28429408s] May 11 18:44:20.389: INFO: Created: latency-svc-zc852 May 11 18:44:20.437: INFO: Got endpoints: latency-svc-zc852 [2.189600087s] May 11 18:44:20.461: INFO: Created: 
latency-svc-qxl8n May 11 18:44:20.477: INFO: Got endpoints: latency-svc-qxl8n [2.06095437s] May 11 18:44:20.533: INFO: Created: latency-svc-2849b May 11 18:44:20.575: INFO: Got endpoints: latency-svc-2849b [1.910318221s] May 11 18:44:20.605: INFO: Created: latency-svc-lmg2z May 11 18:44:20.622: INFO: Got endpoints: latency-svc-lmg2z [1.858313407s] May 11 18:44:20.648: INFO: Created: latency-svc-2f8hs May 11 18:44:20.666: INFO: Got endpoints: latency-svc-2f8hs [1.71689292s] May 11 18:44:20.719: INFO: Created: latency-svc-s9vrj May 11 18:44:20.731: INFO: Got endpoints: latency-svc-s9vrj [1.516358603s] May 11 18:44:20.755: INFO: Created: latency-svc-lsndq May 11 18:44:20.791: INFO: Got endpoints: latency-svc-lsndq [1.443113811s] May 11 18:44:20.863: INFO: Created: latency-svc-dk7jp May 11 18:44:20.875: INFO: Got endpoints: latency-svc-dk7jp [1.45134488s] May 11 18:44:20.923: INFO: Created: latency-svc-9gt92 May 11 18:44:20.959: INFO: Got endpoints: latency-svc-9gt92 [1.385275398s] May 11 18:44:21.008: INFO: Created: latency-svc-4g9zk May 11 18:44:21.027: INFO: Got endpoints: latency-svc-4g9zk [1.219252112s] May 11 18:44:21.055: INFO: Created: latency-svc-42x45 May 11 18:44:21.068: INFO: Got endpoints: latency-svc-42x45 [1.062897679s] May 11 18:44:21.091: INFO: Created: latency-svc-klsph May 11 18:44:21.150: INFO: Got endpoints: latency-svc-klsph [1.084173833s] May 11 18:44:21.170: INFO: Created: latency-svc-tcgx7 May 11 18:44:21.190: INFO: Got endpoints: latency-svc-tcgx7 [1.002040262s] May 11 18:44:21.239: INFO: Created: latency-svc-5v64f May 11 18:44:21.371: INFO: Got endpoints: latency-svc-5v64f [1.077738317s] May 11 18:44:21.373: INFO: Created: latency-svc-t8z9j May 11 18:44:21.412: INFO: Got endpoints: latency-svc-t8z9j [1.049420102s] May 11 18:44:21.452: INFO: Created: latency-svc-62m5f May 11 18:44:21.599: INFO: Got endpoints: latency-svc-62m5f [1.161462734s] May 11 18:44:21.607: INFO: Created: latency-svc-ftqc8 May 11 18:44:21.676: INFO: Got endpoints: latency-svc-ftqc8 [1.199104563s] May 11 18:44:21.827: INFO: Created: latency-svc-89h4x May 11 18:44:21.899: INFO: Got endpoints: latency-svc-89h4x [1.323786969s] May 11 18:44:22.067: INFO: Created: latency-svc-s9csk May 11 18:44:22.121: INFO: Got endpoints: latency-svc-s9csk [1.499095629s] May 11 18:44:22.904: INFO: Created: latency-svc-t22p2 May 11 18:44:23.455: INFO: Got endpoints: latency-svc-t22p2 [2.7887984s] May 11 18:44:23.893: INFO: Created: latency-svc-9fs4s May 11 18:44:23.893: INFO: Created: latency-svc-j4zjf May 11 18:44:24.072: INFO: Got endpoints: latency-svc-9fs4s [3.280911507s] May 11 18:44:24.072: INFO: Got endpoints: latency-svc-j4zjf [3.341275019s] May 11 18:44:24.145: INFO: Created: latency-svc-f7qzd May 11 18:44:24.317: INFO: Got endpoints: latency-svc-f7qzd [3.442614025s] May 11 18:44:24.367: INFO: Created: latency-svc-wghzp May 11 18:44:24.384: INFO: Got endpoints: latency-svc-wghzp [3.425500763s] May 11 18:44:24.479: INFO: Created: latency-svc-fzczq May 11 18:44:24.482: INFO: Got endpoints: latency-svc-fzczq [3.455213759s] May 11 18:44:24.517: INFO: Created: latency-svc-g5hzx May 11 18:44:24.535: INFO: Got endpoints: latency-svc-g5hzx [3.46704294s] May 11 18:44:24.579: INFO: Created: latency-svc-hspsn May 11 18:44:24.706: INFO: Got endpoints: latency-svc-hspsn [3.555954199s] May 11 18:44:24.734: INFO: Created: latency-svc-l8smf May 11 18:44:24.758: INFO: Got endpoints: latency-svc-l8smf [3.568028369s] May 11 18:44:24.870: INFO: Created: latency-svc-qvg55 May 11 18:44:24.875: INFO: Got endpoints: latency-svc-qvg55 
[3.503953194s] May 11 18:44:24.962: INFO: Created: latency-svc-4pjl2 May 11 18:44:25.339: INFO: Got endpoints: latency-svc-4pjl2 [3.927341666s] May 11 18:44:25.663: INFO: Created: latency-svc-8v9pr May 11 18:44:25.852: INFO: Got endpoints: latency-svc-8v9pr [4.252900549s] May 11 18:44:25.932: INFO: Created: latency-svc-s8hz2 May 11 18:44:26.623: INFO: Got endpoints: latency-svc-s8hz2 [4.946906115s] May 11 18:44:26.821: INFO: Created: latency-svc-wvrfq May 11 18:44:26.833: INFO: Got endpoints: latency-svc-wvrfq [4.934189747s] May 11 18:44:27.234: INFO: Created: latency-svc-78jsd May 11 18:44:27.479: INFO: Got endpoints: latency-svc-78jsd [5.358092056s] May 11 18:44:27.835: INFO: Created: latency-svc-f6v26 May 11 18:44:27.869: INFO: Got endpoints: latency-svc-f6v26 [4.413941864s] May 11 18:44:28.256: INFO: Created: latency-svc-rv7fn May 11 18:44:28.303: INFO: Got endpoints: latency-svc-rv7fn [4.231358862s] May 11 18:44:28.425: INFO: Created: latency-svc-89fgn May 11 18:44:28.482: INFO: Got endpoints: latency-svc-89fgn [4.410447627s] May 11 18:44:28.671: INFO: Created: latency-svc-jrsls May 11 18:44:28.679: INFO: Got endpoints: latency-svc-jrsls [4.361538334s] May 11 18:44:28.745: INFO: Created: latency-svc-zscpb May 11 18:44:28.868: INFO: Got endpoints: latency-svc-zscpb [4.483933672s] May 11 18:44:29.174: INFO: Created: latency-svc-tlc96 May 11 18:44:29.243: INFO: Got endpoints: latency-svc-tlc96 [4.760925722s] May 11 18:44:29.499: INFO: Created: latency-svc-jpzrg May 11 18:44:29.869: INFO: Got endpoints: latency-svc-jpzrg [5.333097986s] May 11 18:44:30.639: INFO: Created: latency-svc-2wbq6 May 11 18:44:30.690: INFO: Got endpoints: latency-svc-2wbq6 [5.983660871s] May 11 18:44:30.972: INFO: Created: latency-svc-dqk2w May 11 18:44:31.017: INFO: Got endpoints: latency-svc-dqk2w [6.259567363s] May 11 18:44:31.432: INFO: Created: latency-svc-c7d6c May 11 18:44:31.521: INFO: Got endpoints: latency-svc-c7d6c [6.645666162s] May 11 18:44:31.812: INFO: Created: latency-svc-bzz8p May 11 18:44:32.114: INFO: Got endpoints: latency-svc-bzz8p [6.774706208s] May 11 18:44:32.194: INFO: Created: latency-svc-ckvrm May 11 18:44:32.570: INFO: Got endpoints: latency-svc-ckvrm [6.71794054s] May 11 18:44:32.864: INFO: Created: latency-svc-f55p2 May 11 18:44:32.906: INFO: Got endpoints: latency-svc-f55p2 [6.283233217s] May 11 18:44:33.103: INFO: Created: latency-svc-jslvn May 11 18:44:33.287: INFO: Got endpoints: latency-svc-jslvn [6.453791188s] May 11 18:44:33.450: INFO: Created: latency-svc-vq7mn May 11 18:44:33.605: INFO: Got endpoints: latency-svc-vq7mn [6.125903888s] May 11 18:44:33.619: INFO: Created: latency-svc-zzkwf May 11 18:44:33.691: INFO: Got endpoints: latency-svc-zzkwf [5.821709365s] May 11 18:44:34.516: INFO: Created: latency-svc-7n6z8 May 11 18:44:34.970: INFO: Got endpoints: latency-svc-7n6z8 [6.667363025s] May 11 18:44:34.972: INFO: Created: latency-svc-zzqmd May 11 18:44:35.037: INFO: Got endpoints: latency-svc-zzqmd [6.554274644s] May 11 18:44:35.886: INFO: Created: latency-svc-c6qxz May 11 18:44:35.962: INFO: Got endpoints: latency-svc-c6qxz [7.283229435s] May 11 18:44:36.300: INFO: Created: latency-svc-nxf62 May 11 18:44:36.383: INFO: Got endpoints: latency-svc-nxf62 [7.514548812s] May 11 18:44:36.417: INFO: Created: latency-svc-lb26j May 11 18:44:36.670: INFO: Got endpoints: latency-svc-lb26j [7.426656241s] May 11 18:44:36.783: INFO: Created: latency-svc-vm6fn May 11 18:44:36.832: INFO: Got endpoints: latency-svc-vm6fn [6.962942721s] May 11 18:44:37.345: INFO: Created: latency-svc-cf2c5 May 
11 18:44:37.593: INFO: Got endpoints: latency-svc-cf2c5 [6.90332581s] May 11 18:44:37.596: INFO: Created: latency-svc-b4nn7 May 11 18:44:37.778: INFO: Got endpoints: latency-svc-b4nn7 [6.760588779s] May 11 18:44:38.024: INFO: Created: latency-svc-wjtnh May 11 18:44:38.029: INFO: Got endpoints: latency-svc-wjtnh [6.507623382s] May 11 18:44:38.119: INFO: Created: latency-svc-mrwc2 May 11 18:44:38.797: INFO: Got endpoints: latency-svc-mrwc2 [6.683094787s] May 11 18:44:39.265: INFO: Created: latency-svc-5mc2b May 11 18:44:39.600: INFO: Got endpoints: latency-svc-5mc2b [7.029915707s] May 11 18:44:40.094: INFO: Created: latency-svc-pl2jd May 11 18:44:40.427: INFO: Got endpoints: latency-svc-pl2jd [7.520040834s] May 11 18:44:40.513: INFO: Created: latency-svc-mgnxr May 11 18:44:40.833: INFO: Got endpoints: latency-svc-mgnxr [7.546355125s] May 11 18:44:40.836: INFO: Created: latency-svc-7zdqc May 11 18:44:40.845: INFO: Got endpoints: latency-svc-7zdqc [7.240447939s] May 11 18:44:41.092: INFO: Created: latency-svc-rxwvx May 11 18:44:41.118: INFO: Got endpoints: latency-svc-rxwvx [7.427280016s] May 11 18:44:41.736: INFO: Created: latency-svc-kll66 May 11 18:44:42.048: INFO: Got endpoints: latency-svc-kll66 [7.077661523s] May 11 18:44:42.350: INFO: Created: latency-svc-lktd5 May 11 18:44:42.352: INFO: Got endpoints: latency-svc-lktd5 [7.315410128s] May 11 18:44:42.389: INFO: Created: latency-svc-tdn4h May 11 18:44:42.569: INFO: Got endpoints: latency-svc-tdn4h [6.606545981s] May 11 18:44:42.918: INFO: Created: latency-svc-l82cp May 11 18:44:43.379: INFO: Got endpoints: latency-svc-l82cp [6.995823656s] May 11 18:44:43.918: INFO: Created: latency-svc-dgw8x May 11 18:44:44.228: INFO: Got endpoints: latency-svc-dgw8x [7.558125s] May 11 18:44:44.240: INFO: Created: latency-svc-xxstg May 11 18:44:44.629: INFO: Got endpoints: latency-svc-xxstg [7.797236535s] May 11 18:44:44.953: INFO: Created: latency-svc-hs2h4 May 11 18:44:44.956: INFO: Got endpoints: latency-svc-hs2h4 [7.362464257s] May 11 18:44:45.224: INFO: Created: latency-svc-l7sgd May 11 18:44:45.563: INFO: Got endpoints: latency-svc-l7sgd [7.785216445s] May 11 18:44:45.584: INFO: Created: latency-svc-jq8wm May 11 18:44:45.650: INFO: Got endpoints: latency-svc-jq8wm [7.621155365s] May 11 18:44:46.438: INFO: Created: latency-svc-g72cb May 11 18:44:47.139: INFO: Got endpoints: latency-svc-g72cb [8.341240376s] May 11 18:44:47.143: INFO: Created: latency-svc-tfh4s May 11 18:44:47.235: INFO: Got endpoints: latency-svc-tfh4s [7.63546661s] May 11 18:44:47.878: INFO: Created: latency-svc-t4c7k May 11 18:44:47.883: INFO: Got endpoints: latency-svc-t4c7k [7.456757759s] May 11 18:44:48.449: INFO: Created: latency-svc-rzlpz May 11 18:44:48.464: INFO: Got endpoints: latency-svc-rzlpz [7.63091243s] May 11 18:44:48.500: INFO: Created: latency-svc-rftbt May 11 18:44:48.687: INFO: Got endpoints: latency-svc-rftbt [7.841679279s] May 11 18:44:48.743: INFO: Created: latency-svc-mk4ql May 11 18:44:49.012: INFO: Got endpoints: latency-svc-mk4ql [7.893814564s] May 11 18:44:49.281: INFO: Created: latency-svc-nxgpn May 11 18:44:49.391: INFO: Got endpoints: latency-svc-nxgpn [7.342369818s] May 11 18:44:49.635: INFO: Created: latency-svc-8j2h7 May 11 18:44:49.637: INFO: Got endpoints: latency-svc-8j2h7 [7.284871266s] May 11 18:44:49.698: INFO: Created: latency-svc-562g2 May 11 18:44:49.826: INFO: Got endpoints: latency-svc-562g2 [7.257521461s] May 11 18:44:50.318: INFO: Created: latency-svc-dxd6l May 11 18:44:50.354: INFO: Got endpoints: latency-svc-dxd6l [6.97540198s] May 11 
18:44:50.906: INFO: Created: latency-svc-4c48m May 11 18:44:50.923: INFO: Got endpoints: latency-svc-4c48m [6.694869547s] May 11 18:44:51.364: INFO: Created: latency-svc-7jxfd May 11 18:44:51.608: INFO: Got endpoints: latency-svc-7jxfd [6.979080044s] May 11 18:44:51.858: INFO: Created: latency-svc-jp5pq May 11 18:44:51.931: INFO: Got endpoints: latency-svc-jp5pq [6.974779537s] May 11 18:44:52.096: INFO: Created: latency-svc-sh2nf May 11 18:44:52.112: INFO: Got endpoints: latency-svc-sh2nf [6.548723894s] May 11 18:44:52.184: INFO: Created: latency-svc-ghl6b May 11 18:44:52.479: INFO: Got endpoints: latency-svc-ghl6b [6.829242041s] May 11 18:44:52.532: INFO: Created: latency-svc-wrjkz May 11 18:44:52.846: INFO: Got endpoints: latency-svc-wrjkz [5.706958246s] May 11 18:44:53.093: INFO: Created: latency-svc-ffx8r May 11 18:44:53.467: INFO: Got endpoints: latency-svc-ffx8r [6.231838605s] May 11 18:44:53.892: INFO: Created: latency-svc-9bwtg May 11 18:44:54.259: INFO: Got endpoints: latency-svc-9bwtg [6.375363936s] May 11 18:44:54.259: INFO: Created: latency-svc-tltxv May 11 18:44:54.269: INFO: Got endpoints: latency-svc-tltxv [5.80505002s] May 11 18:44:54.717: INFO: Created: latency-svc-qzqp6 May 11 18:44:54.743: INFO: Got endpoints: latency-svc-qzqp6 [6.056194542s] May 11 18:44:54.911: INFO: Created: latency-svc-vfsvf May 11 18:44:54.947: INFO: Got endpoints: latency-svc-vfsvf [5.93467569s] May 11 18:44:55.476: INFO: Created: latency-svc-d8qfm May 11 18:44:55.558: INFO: Got endpoints: latency-svc-d8qfm [6.167372015s] May 11 18:44:55.809: INFO: Created: latency-svc-r6vh9 May 11 18:44:55.890: INFO: Got endpoints: latency-svc-r6vh9 [6.252973644s] May 11 18:44:56.540: INFO: Created: latency-svc-nwqw5 May 11 18:44:56.543: INFO: Got endpoints: latency-svc-nwqw5 [6.716546819s] May 11 18:44:56.803: INFO: Created: latency-svc-gj895 May 11 18:44:57.182: INFO: Got endpoints: latency-svc-gj895 [6.827465427s] May 11 18:44:57.183: INFO: Created: latency-svc-d954z May 11 18:44:57.366: INFO: Got endpoints: latency-svc-d954z [6.443277269s] May 11 18:44:57.406: INFO: Created: latency-svc-298g8 May 11 18:44:57.528: INFO: Got endpoints: latency-svc-298g8 [5.91994242s] May 11 18:44:57.586: INFO: Created: latency-svc-2sckp May 11 18:44:57.627: INFO: Got endpoints: latency-svc-2sckp [5.696360562s] May 11 18:44:57.731: INFO: Created: latency-svc-z788p May 11 18:44:57.733: INFO: Got endpoints: latency-svc-z788p [5.621132873s] May 11 18:44:58.384: INFO: Created: latency-svc-xgvgv May 11 18:44:58.762: INFO: Got endpoints: latency-svc-xgvgv [6.282270237s] May 11 18:44:58.995: INFO: Created: latency-svc-kgczs May 11 18:44:59.057: INFO: Got endpoints: latency-svc-kgczs [6.211214197s] May 11 18:44:59.327: INFO: Created: latency-svc-k86dx May 11 18:44:59.935: INFO: Got endpoints: latency-svc-k86dx [6.468107991s] May 11 18:44:59.988: INFO: Created: latency-svc-ghf7j May 11 18:45:00.186: INFO: Got endpoints: latency-svc-ghf7j [5.926937797s] May 11 18:45:00.431: INFO: Created: latency-svc-948sv May 11 18:45:00.492: INFO: Got endpoints: latency-svc-948sv [6.22285502s] May 11 18:45:00.683: INFO: Created: latency-svc-7rsv9 May 11 18:45:00.685: INFO: Got endpoints: latency-svc-7rsv9 [5.941711225s] May 11 18:45:01.721: INFO: Created: latency-svc-9x5tj May 11 18:45:01.759: INFO: Got endpoints: latency-svc-9x5tj [6.811674109s] May 11 18:45:02.276: INFO: Created: latency-svc-cp96j May 11 18:45:02.284: INFO: Got endpoints: latency-svc-cp96j [6.726181303s] May 11 18:45:02.480: INFO: Created: latency-svc-twgf6 May 11 18:45:02.483: INFO: 
Got endpoints: latency-svc-twgf6 [6.592393343s] May 11 18:45:02.952: INFO: Created: latency-svc-vvh9s May 11 18:45:02.955: INFO: Got endpoints: latency-svc-vvh9s [6.412145538s] May 11 18:45:03.133: INFO: Created: latency-svc-c7h6k May 11 18:45:03.324: INFO: Got endpoints: latency-svc-c7h6k [6.142181982s] May 11 18:45:03.515: INFO: Created: latency-svc-s8ndc May 11 18:45:03.574: INFO: Got endpoints: latency-svc-s8ndc [6.208093716s] May 11 18:45:03.784: INFO: Created: latency-svc-w7g8g May 11 18:45:04.133: INFO: Got endpoints: latency-svc-w7g8g [6.604084607s] May 11 18:45:04.135: INFO: Created: latency-svc-hgkgx May 11 18:45:04.193: INFO: Got endpoints: latency-svc-hgkgx [6.565405606s] May 11 18:45:04.342: INFO: Created: latency-svc-nmwjz May 11 18:45:04.875: INFO: Got endpoints: latency-svc-nmwjz [7.141374677s] May 11 18:45:04.924: INFO: Created: latency-svc-d6l8f May 11 18:45:05.355: INFO: Got endpoints: latency-svc-d6l8f [6.593129761s] May 11 18:45:05.463: INFO: Created: latency-svc-4fq26 May 11 18:45:05.517: INFO: Got endpoints: latency-svc-4fq26 [6.460375351s] May 11 18:45:05.809: INFO: Created: latency-svc-rdw67 May 11 18:45:05.873: INFO: Got endpoints: latency-svc-rdw67 [5.938288823s] May 11 18:45:05.875: INFO: Created: latency-svc-zf94h May 11 18:45:05.895: INFO: Got endpoints: latency-svc-zf94h [5.709247514s] May 11 18:45:06.205: INFO: Created: latency-svc-qgq5j May 11 18:45:06.231: INFO: Got endpoints: latency-svc-qgq5j [5.739087885s] May 11 18:45:06.426: INFO: Created: latency-svc-4grn9 May 11 18:45:06.444: INFO: Got endpoints: latency-svc-4grn9 [5.758886929s] May 11 18:45:06.841: INFO: Created: latency-svc-m7ln9 May 11 18:45:07.175: INFO: Got endpoints: latency-svc-m7ln9 [5.415852891s] May 11 18:45:07.231: INFO: Created: latency-svc-d7wgd May 11 18:45:07.516: INFO: Got endpoints: latency-svc-d7wgd [5.232139218s] May 11 18:45:07.581: INFO: Created: latency-svc-v6sb9 May 11 18:45:07.695: INFO: Got endpoints: latency-svc-v6sb9 [5.212343435s] May 11 18:45:08.200: INFO: Created: latency-svc-wfkgp May 11 18:45:08.208: INFO: Got endpoints: latency-svc-wfkgp [5.253168222s] May 11 18:45:08.240: INFO: Created: latency-svc-f25vz May 11 18:45:08.289: INFO: Got endpoints: latency-svc-f25vz [4.964800402s] May 11 18:45:08.414: INFO: Created: latency-svc-v5psf May 11 18:45:08.439: INFO: Got endpoints: latency-svc-v5psf [4.864294883s] May 11 18:45:08.510: INFO: Created: latency-svc-g5lvh May 11 18:45:08.554: INFO: Got endpoints: latency-svc-g5lvh [4.421485255s] May 11 18:45:08.608: INFO: Created: latency-svc-qtnxh May 11 18:45:08.714: INFO: Got endpoints: latency-svc-qtnxh [4.521025794s] May 11 18:45:08.749: INFO: Created: latency-svc-g5lrc May 11 18:45:08.782: INFO: Got endpoints: latency-svc-g5lrc [3.906974287s] May 11 18:45:08.882: INFO: Created: latency-svc-lh6s2 May 11 18:45:09.163: INFO: Created: latency-svc-khk7x May 11 18:45:09.164: INFO: Got endpoints: latency-svc-lh6s2 [3.80882539s] May 11 18:45:09.438: INFO: Got endpoints: latency-svc-khk7x [3.920273332s] May 11 18:45:09.726: INFO: Created: latency-svc-5cd8k May 11 18:45:10.014: INFO: Got endpoints: latency-svc-5cd8k [4.13997182s] May 11 18:45:10.015: INFO: Created: latency-svc-2hd25 May 11 18:45:10.049: INFO: Got endpoints: latency-svc-2hd25 [4.153992522s] May 11 18:45:10.254: INFO: Created: latency-svc-ckvr4 May 11 18:45:10.257: INFO: Got endpoints: latency-svc-ckvr4 [4.025661593s] May 11 18:45:10.351: INFO: Created: latency-svc-bsx7q May 11 18:45:10.468: INFO: Got endpoints: latency-svc-bsx7q [4.02351013s] May 11 18:45:11.202: INFO: 
Created: latency-svc-5k96w May 11 18:45:11.677: INFO: Got endpoints: latency-svc-5k96w [4.502483541s] May 11 18:45:11.680: INFO: Created: latency-svc-wcpmg May 11 18:45:11.711: INFO: Got endpoints: latency-svc-wcpmg [4.194672899s] May 11 18:45:12.618: INFO: Created: latency-svc-hs2bh May 11 18:45:12.642: INFO: Got endpoints: latency-svc-hs2bh [4.947333731s] May 11 18:45:13.110: INFO: Created: latency-svc-f5hv2 May 11 18:45:13.312: INFO: Got endpoints: latency-svc-f5hv2 [5.103382211s] May 11 18:45:13.315: INFO: Created: latency-svc-k7gm5 May 11 18:45:13.659: INFO: Got endpoints: latency-svc-k7gm5 [5.370096624s] May 11 18:45:13.730: INFO: Created: latency-svc-xzfwm May 11 18:45:13.839: INFO: Got endpoints: latency-svc-xzfwm [5.400790531s] May 11 18:45:13.839: INFO: Latencies: [926.929291ms 1.000367274s 1.002040262s 1.049420102s 1.062897679s 1.077738317s 1.084173833s 1.161462734s 1.199104563s 1.219252112s 1.323786969s 1.385275398s 1.443113811s 1.45134488s 1.499095629s 1.516358603s 1.604247453s 1.71689292s 1.858313407s 1.910318221s 2.06095437s 2.189600087s 2.250696532s 2.28429408s 2.353322095s 2.390266015s 2.402941561s 2.463061404s 2.536639439s 2.587265675s 2.603408366s 2.705588246s 2.725106786s 2.741155062s 2.7887984s 2.846592887s 2.929516676s 2.932853278s 2.94286752s 2.958043383s 3.029644172s 3.086539199s 3.175413843s 3.200870617s 3.280911507s 3.341275019s 3.401935354s 3.425500763s 3.431746543s 3.442614025s 3.455213759s 3.46704294s 3.503953194s 3.555954199s 3.568028369s 3.617287764s 3.664355578s 3.80882539s 3.906974287s 3.920273332s 3.927341666s 3.93904934s 4.02351013s 4.025661593s 4.044939082s 4.114249192s 4.13997182s 4.153992522s 4.187056375s 4.194672899s 4.213825515s 4.231358862s 4.252900549s 4.361538334s 4.410447627s 4.413941864s 4.421485255s 4.440498604s 4.474212332s 4.483933672s 4.502483541s 4.521025794s 4.641118547s 4.651619834s 4.760925722s 4.861816059s 4.864294883s 4.912285181s 4.922708281s 4.934189747s 4.946906115s 4.947333731s 4.964800402s 5.059523236s 5.103382211s 5.190714817s 5.212343435s 5.232139218s 5.253168222s 5.333097986s 5.358092056s 5.370096624s 5.400790531s 5.415852891s 5.621132873s 5.633782658s 5.696360562s 5.706958246s 5.709247514s 5.739087885s 5.758886929s 5.80505002s 5.821709365s 5.840674896s 5.86780306s 5.91994242s 5.926937797s 5.93467569s 5.938288823s 5.941711225s 5.983660871s 6.02723823s 6.056194542s 6.125903888s 6.142181982s 6.167372015s 6.208093716s 6.211214197s 6.22285502s 6.231838605s 6.252973644s 6.259567363s 6.282270237s 6.283233217s 6.375363936s 6.412145538s 6.443277269s 6.453791188s 6.460375351s 6.468107991s 6.507623382s 6.548723894s 6.554274644s 6.565405606s 6.592393343s 6.593129761s 6.604084607s 6.606545981s 6.645666162s 6.667363025s 6.683094787s 6.684241065s 6.694869547s 6.716546819s 6.71794054s 6.726181303s 6.760588779s 6.774706208s 6.811674109s 6.827465427s 6.829242041s 6.90332581s 6.92115609s 6.962942721s 6.974779537s 6.97540198s 6.979080044s 6.995823656s 7.000754506s 7.006409938s 7.029915707s 7.077661523s 7.141374677s 7.186734329s 7.240447939s 7.257521461s 7.283229435s 7.284871266s 7.293084693s 7.315410128s 7.341821308s 7.342369818s 7.343256949s 7.362464257s 7.426656241s 7.427280016s 7.456757759s 7.508645332s 7.514548812s 7.520040834s 7.546355125s 7.558125s 7.621155365s 7.63091243s 7.63546661s 7.785216445s 7.797236535s 7.841679279s 7.893814564s 8.341240376s] May 11 18:45:13.840: INFO: 50 %ile: 5.358092056s May 11 18:45:13.840: INFO: 90 %ile: 7.341821308s May 11 18:45:13.840: INFO: 99 %ile: 7.893814564s May 11 18:45:13.840: INFO: Total sample count: 
200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:45:13.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4274" for this suite. May 11 18:47:49.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:47:50.173: INFO: namespace svc-latency-4274 deletion completed in 2m36.294065693s • [SLOW TEST:237.790 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:47:50.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-4360 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4360 STEP: Deleting pre-stop pod May 11 18:48:15.919: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:48:15.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4360" for this suite. 
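The {"prestop": 1} the server reports above is the tester pod's preStop hook firing while the pod is being deleted: the kubelet runs the hook before sending SIGTERM to the container. A sketch of that shape (the image, port, and placeholder address are assumptions; the e2e test wires in the real server pod IP):

  apiVersion: v1
  kind: Pod
  metadata:
    name: tester
    namespace: prestop-4360
  spec:
    containers:
    - name: tester
      image: busybox
      command: ["sleep", "600"]
      lifecycle:
        preStop:
          exec:
            command: ["wget", "-qO-", "http://SERVER_POD_IP:8080/prestop"]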
May 11 18:48:57.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:48:57.233: INFO: namespace prestop-4360 deletion completed in 41.007875706s • [SLOW TEST:67.060 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:48:57.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5552 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 11 18:48:59.075: INFO: Found 0 stateful pods, waiting for 3 May 11 18:49:09.336: INFO: Found 2 stateful pods, waiting for 3 May 11 18:49:19.423: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:49:19.423: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:49:19.423: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 18:49:29.154: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:49:29.154: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:49:29.154: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 18:49:29.179: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 18:49:39.543: INFO: Updating stateful set ss2 May 11 18:49:39.740: INFO: Waiting for Pod statefulset-5552/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:49:49.816: INFO: Waiting for Pod statefulset-5552/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 11 18:50:02.683: INFO: Found 2 stateful pods, waiting for 3 May 11 18:50:12.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:50:12.802: INFO: Waiting for pod ss2-1 to enter Running 
- Ready=true, currently Running - Ready=true May 11 18:50:12.802: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 18:50:22.689: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:50:22.689: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:50:22.689: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 18:50:22.713: INFO: Updating stateful set ss2 May 11 18:50:22.853: INFO: Waiting for Pod statefulset-5552/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:50:32.879: INFO: Updating stateful set ss2 May 11 18:50:33.363: INFO: Waiting for StatefulSet statefulset-5552/ss2 to complete update May 11 18:50:33.363: INFO: Waiting for Pod statefulset-5552/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:50:43.666: INFO: Waiting for StatefulSet statefulset-5552/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 18:50:53.369: INFO: Deleting all statefulset in ns statefulset-5552 May 11 18:50:53.371: INFO: Scaling statefulset ss2 to 0 May 11 18:51:13.519: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:51:13.521: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:51:13.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5552" for this suite. 
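Both behaviors above hang off the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the update revision, so partition: 2 makes ss2-2 the lone canary, and the phased rollout is just the partition being lowered step by step. The spec trimmed to the fields that matter here (the selector labels are illustrative):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss2
    namespace: statefulset-5552
  spec:
    serviceName: test            # the headless service created above
    replicas: 3
    selector:
      matchLabels:
        app: ss2
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        partition: 2             # canary: only ss2-2 gets the new template
    template:
      metadata:
        labels:
          app: ss2
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.15-alpine

  # phase the rollout by lowering the partition, e.g. all the way to 0:
  kubectl patch statefulset ss2 -n statefulset-5552 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'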
May 11 18:51:26.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:51:26.110: INFO: namespace statefulset-5552 deletion completed in 12.570820008s • [SLOW TEST:148.877 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:51:26.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 11 18:51:26.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-970' May 11 18:51:36.965: INFO: stderr: "" May 11 18:51:36.965: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 11 18:51:37.969: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:37.969: INFO: Found 0 / 1 May 11 18:51:40.024: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:40.024: INFO: Found 0 / 1 May 11 18:51:41.623: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:41.623: INFO: Found 0 / 1 May 11 18:51:42.090: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:42.090: INFO: Found 0 / 1 May 11 18:51:43.164: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:43.164: INFO: Found 0 / 1 May 11 18:51:43.970: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:43.970: INFO: Found 0 / 1 May 11 18:51:45.149: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:45.149: INFO: Found 0 / 1 May 11 18:51:45.968: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:45.968: INFO: Found 0 / 1 May 11 18:51:46.970: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:46.970: INFO: Found 1 / 1 May 11 18:51:46.970: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 11 18:51:46.974: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:46.974: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
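The patch executed in the next entry is a strategic-merge patch adding a single annotation; the standalone invocation, with the quoting a typical shell needs (the jsonpath read-back is added here for illustration):

  kubectl patch pod redis-master-rhghf -n kubectl-970 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl get pod redis-master-rhghf -n kubectl-970 \
    -o jsonpath='{.metadata.annotations.x}'   # prints: y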
May 11 18:51:46.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rhghf --namespace=kubectl-970 -p {"metadata":{"annotations":{"x":"y"}}}' May 11 18:51:47.057: INFO: stderr: "" May 11 18:51:47.057: INFO: stdout: "pod/redis-master-rhghf patched\n" STEP: checking annotations May 11 18:51:47.092: INFO: Selector matched 1 pods for map[app:redis] May 11 18:51:47.092: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:51:47.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-970" for this suite. May 11 18:52:13.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:52:13.222: INFO: namespace kubectl-970 deletion completed in 26.126283883s • [SLOW TEST:47.111 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:52:13.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444 May 11 18:52:13.782: INFO: Pod name my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444: Found 0 pods out of 1 May 11 18:52:18.838: INFO: Pod name my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444: Found 1 pods out of 1 May 11 18:52:18.838: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444" are running May 11 18:52:20.845: INFO: Pod "my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444-28q6v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:52:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:52:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:52:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 18:52:13 
+0000 UTC Reason: Message:}]) May 11 18:52:20.845: INFO: Trying to dial the pod May 11 18:52:26.595: INFO: Controller my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444: Got expected result from replica 1 [my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444-28q6v]: "my-hostname-basic-847e46e3-fd56-4085-b011-b57933486444-28q6v", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:52:26.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6610" for this suite. May 11 18:52:34.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:52:34.946: INFO: namespace replication-controller-6610 deletion completed in 8.347507167s • [SLOW TEST:21.725 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:52:34.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 11 18:52:35.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9513' May 11 18:52:36.110: INFO: stderr: "" May 11 18:52:36.110: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:52:36.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:36.377: INFO: stderr: "" May 11 18:52:36.377: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " May 11 18:52:36.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68vql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:36.606: INFO: stderr: "" May 11 18:52:36.606: INFO: stdout: "" May 11 18:52:36.606: INFO: update-demo-nautilus-68vql is created but not running May 11 18:52:41.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:41.702: INFO: stderr: "" May 11 18:52:41.702: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " May 11 18:52:41.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68vql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:41.788: INFO: stderr: "" May 11 18:52:41.788: INFO: stdout: "" May 11 18:52:41.788: INFO: update-demo-nautilus-68vql is created but not running May 11 18:52:46.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:46.892: INFO: stderr: "" May 11 18:52:46.892: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " May 11 18:52:46.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68vql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:46.977: INFO: stderr: "" May 11 18:52:46.977: INFO: stdout: "true" May 11 18:52:46.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68vql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:47.166: INFO: stderr: "" May 11 18:52:47.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:52:47.166: INFO: validating pod update-demo-nautilus-68vql May 11 18:52:47.302: INFO: got data: { "image": "nautilus.jpg" } May 11 18:52:47.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:52:47.302: INFO: update-demo-nautilus-68vql is verified up and running May 11 18:52:47.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:47.461: INFO: stderr: "" May 11 18:52:47.461: INFO: stdout: "true" May 11 18:52:47.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:52:47.541: INFO: stderr: "" May 11 18:52:47.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:52:47.541: INFO: validating pod update-demo-nautilus-9f2b2 May 11 18:52:47.545: INFO: got data: { "image": "nautilus.jpg" } May 11 18:52:47.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:52:47.545: INFO: update-demo-nautilus-9f2b2 is verified up and running STEP: scaling down the replication controller May 11 18:52:47.548: INFO: scanned /root for discovery docs: May 11 18:52:47.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9513' May 11 18:52:48.759: INFO: stderr: "" May 11 18:52:48.759: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:52:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:48.853: INFO: stderr: "" May 11 18:52:48.853: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:52:53.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:53.964: INFO: stderr: "" May 11 18:52:53.964: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:52:58.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:52:59.055: INFO: stderr: "" May 11 18:52:59.055: INFO: stdout: "update-demo-nautilus-68vql update-demo-nautilus-9f2b2 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:53:04.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:53:04.140: INFO: stderr: "" May 11 18:53:04.140: INFO: stdout: "update-demo-nautilus-9f2b2 " May 11 18:53:04.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:04.231: INFO: stderr: "" May 11 18:53:04.231: INFO: stdout: "true" May 11 18:53:04.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:04.320: INFO: stderr: "" May 11 18:53:04.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:53:04.320: INFO: validating pod update-demo-nautilus-9f2b2 May 11 18:53:04.323: INFO: got data: { "image": "nautilus.jpg" } May 11 18:53:04.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:53:04.323: INFO: update-demo-nautilus-9f2b2 is verified up and running STEP: scaling up the replication controller May 11 18:53:04.325: INFO: scanned /root for discovery docs: May 11 18:53:04.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9513' May 11 18:53:05.437: INFO: stderr: "" May 11 18:53:05.437: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:53:05.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:53:05.537: INFO: stderr: "" May 11 18:53:05.537: INFO: stdout: "update-demo-nautilus-9f2b2 update-demo-nautilus-n5g4z " May 11 18:53:05.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:05.633: INFO: stderr: "" May 11 18:53:05.633: INFO: stdout: "true" May 11 18:53:05.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:05.741: INFO: stderr: "" May 11 18:53:05.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:53:05.741: INFO: validating pod update-demo-nautilus-9f2b2 May 11 18:53:05.743: INFO: got data: { "image": "nautilus.jpg" } May 11 18:53:05.743: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:53:05.743: INFO: update-demo-nautilus-9f2b2 is verified up and running May 11 18:53:05.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n5g4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:05.828: INFO: stderr: "" May 11 18:53:05.828: INFO: stdout: "" May 11 18:53:05.828: INFO: update-demo-nautilus-n5g4z is created but not running May 11 18:53:10.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9513' May 11 18:53:10.927: INFO: stderr: "" May 11 18:53:10.927: INFO: stdout: "update-demo-nautilus-9f2b2 update-demo-nautilus-n5g4z " May 11 18:53:10.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:11.010: INFO: stderr: "" May 11 18:53:11.010: INFO: stdout: "true" May 11 18:53:11.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9f2b2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:11.097: INFO: stderr: "" May 11 18:53:11.097: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:53:11.097: INFO: validating pod update-demo-nautilus-9f2b2 May 11 18:53:11.100: INFO: got data: { "image": "nautilus.jpg" } May 11 18:53:11.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:53:11.100: INFO: update-demo-nautilus-9f2b2 is verified up and running May 11 18:53:11.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n5g4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:11.178: INFO: stderr: "" May 11 18:53:11.178: INFO: stdout: "true" May 11 18:53:11.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n5g4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9513' May 11 18:53:11.271: INFO: stderr: "" May 11 18:53:11.271: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:53:11.271: INFO: validating pod update-demo-nautilus-n5g4z May 11 18:53:11.274: INFO: got data: { "image": "nautilus.jpg" } May 11 18:53:11.274: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:53:11.274: INFO: update-demo-nautilus-n5g4z is verified up and running STEP: using delete to clean up resources May 11 18:53:11.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9513' May 11 18:53:11.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 18:53:11.384: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 18:53:11.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9513' May 11 18:53:11.484: INFO: stderr: "No resources found.\n" May 11 18:53:11.484: INFO: stdout: "" May 11 18:53:11.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9513 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:53:11.568: INFO: stderr: "" May 11 18:53:11.568: INFO: stdout: "update-demo-nautilus-9f2b2\nupdate-demo-nautilus-n5g4z\n" May 11 18:53:12.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9513' May 11 18:53:12.489: INFO: stderr: "No resources found.\n" May 11 18:53:12.489: INFO: stdout: "" May 11 18:53:12.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9513 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:53:12.788: INFO: stderr: "" May 11 18:53:12.788: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:53:12.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9513" for this suite. May 11 18:53:41.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:53:41.943: INFO: namespace kubectl-9513 deletion completed in 29.151127083s • [SLOW TEST:66.996 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:53:41.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:53:42.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc 
--image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1680' May 11 18:53:42.470: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 18:53:42.470: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 11 18:53:42.995: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-frlbj] May 11 18:53:42.995: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-frlbj" in namespace "kubectl-1680" to be "running and ready" May 11 18:53:43.035: INFO: Pod "e2e-test-nginx-rc-frlbj": Phase="Pending", Reason="", readiness=false. Elapsed: 39.967482ms May 11 18:53:45.321: INFO: Pod "e2e-test-nginx-rc-frlbj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326299554s May 11 18:53:47.326: INFO: Pod "e2e-test-nginx-rc-frlbj": Phase="Running", Reason="", readiness=true. Elapsed: 4.330729397s May 11 18:53:47.326: INFO: Pod "e2e-test-nginx-rc-frlbj" satisfied condition "running and ready" May 11 18:53:47.326: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-frlbj] May 11 18:53:47.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1680' May 11 18:53:47.495: INFO: stderr: "" May 11 18:53:47.495: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 11 18:53:47.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1680' May 11 18:53:47.600: INFO: stderr: "" May 11 18:53:47.601: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:53:47.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1680" for this suite. 
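The deprecation warning above points at kubectl create as the replacement for the run/v1 generator; one non-deprecated equivalent, sketched as an explicit ReplicationController manifest (spec illustrative, not the generator's exact output):

    # rc.yaml -- illustrative equivalent of what the deprecated generator produced
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: e2e-test-nginx-rc
    spec:
      replicas: 1
      selector:
        run: e2e-test-nginx-rc
      template:
        metadata:
          labels:
            run: e2e-test-nginx-rc
        spec:
          containers:
          - name: e2e-test-nginx-rc
            image: docker.io/library/nginx:1.14-alpine

    kubectl create -f rc.yaml --namespace=kubectl-1680
    kubectl logs rc/e2e-test-nginx-rc --namespace=kubectl-1680
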
May 11 18:53:55.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:53:55.704: INFO: namespace kubectl-1680 deletion completed in 8.100727181s • [SLOW TEST:13.761 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:53:55.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1317 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1317 STEP: Creating statefulset with conflicting port in namespace statefulset-1317 STEP: Waiting until pod test-pod will start running in namespace statefulset-1317 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1317 May 11 18:54:03.835: INFO: Observed stateful pod in namespace: statefulset-1317, name: ss-0, uid: 5fa3e578-a5b1-4ef6-8839-daccd4f13d75, status phase: Pending. Waiting for statefulset controller to delete. May 11 18:54:12.146: INFO: Observed stateful pod in namespace: statefulset-1317, name: ss-0, uid: 5fa3e578-a5b1-4ef6-8839-daccd4f13d75, status phase: Failed. Waiting for statefulset controller to delete. May 11 18:54:12.204: INFO: Observed stateful pod in namespace: statefulset-1317, name: ss-0, uid: 5fa3e578-a5b1-4ef6-8839-daccd4f13d75, status phase: Failed. Waiting for statefulset controller to delete. 
May 11 18:54:12.332: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1317 STEP: Removing pod with conflicting port in namespace statefulset-1317 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1317 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 18:54:18.633: INFO: Deleting all statefulset in ns statefulset-1317 May 11 18:54:18.636: INFO: Scaling statefulset ss to 0 May 11 18:54:28.650: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:54:28.653: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:54:28.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1317" for this suite. May 11 18:54:36.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:54:36.779: INFO: namespace statefulset-1317 deletion completed in 8.096895597s • [SLOW TEST:41.074 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:54:36.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-9d5ed3cf-f5d8-467e-a165-0f239b409189 STEP: Creating secret with name s-test-opt-upd-cc6b180e-a292-4970-bc08-84afef198243 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9d5ed3cf-f5d8-467e-a165-0f239b409189 STEP: Updating secret s-test-opt-upd-cc6b180e-a292-4970-bc08-84afef198243 STEP: Creating secret with name s-test-opt-create-6d954d18-2af5-43aa-9e16-41e53bdcee7c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:54:50.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2411" for this suite. 
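What makes the secret update above observable without a pod restart is that the kubelet refreshes projected volume contents, and optional: true lets the pod keep running while a source secret is deleted or has not been created yet. A sketch of such a pod, with illustrative names:

    # projected-pod.yaml -- names are illustrative
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: projected-secrets
          mountPath: /etc/projected
      volumes:
      - name: projected-secrets
        projected:
          sources:
          - secret:
              name: s-test-opt-del
              optional: true
          - secret:
              name: s-test-opt-upd
              optional: true
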
May 11 18:55:15.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:55:15.306: INFO: namespace projected-2411 deletion completed in 24.309639399s • [SLOW TEST:38.527 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:55:15.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:55:26.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9396" for this suite. 
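Adoption works because the controller's selector matches the pre-existing pod's labels, so the RC takes ownership of the orphan instead of creating a second replica; ownership is recorded as an ownerReference on the pod. A sketch, with illustrative names:

    kubectl run pod-adoption --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption

    # rc-adopt.yaml -- selector matches the bare pod's label
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: pod-adoption
            image: docker.io/library/nginx:1.14-alpine

    kubectl create -f rc-adopt.yaml
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # expect: ReplicationController
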
May 11 18:55:48.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:55:48.355: INFO: namespace replication-controller-9396 deletion completed in 22.15051977s • [SLOW TEST:33.049 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:55:48.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-beac9965-d7bd-42d2-a749-3e1c6d4e8afb STEP: Creating a pod to test consume configMaps May 11 18:55:49.673: INFO: Waiting up to 5m0s for pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464" in namespace "configmap-4572" to be "success or failure" May 11 18:55:49.924: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464": Phase="Pending", Reason="", readiness=false. Elapsed: 251.168883ms May 11 18:55:51.928: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255180706s May 11 18:55:53.931: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258511084s May 11 18:55:55.936: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464": Phase="Running", Reason="", readiness=true. Elapsed: 6.262645136s May 11 18:55:57.939: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.265892404s STEP: Saw pod success May 11 18:55:57.939: INFO: Pod "pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464" satisfied condition "success or failure" May 11 18:55:57.941: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464 container configmap-volume-test: STEP: delete the pod May 11 18:55:57.973: INFO: Waiting for pod pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464 to disappear May 11 18:55:57.977: INFO: Pod pod-configmaps-2787de77-3235-441c-af7d-6370c2d6e464 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:55:57.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4572" for this suite. 
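Consuming a ConfigMap as a volume means each key becomes a file under the mount path, which is what the pod above reads before exiting with success. A minimal sketch with illustrative names:

    kubectl create configmap cm-demo --from-literal=data-1=value-1

    # cm-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["cat", "/etc/cm/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-demo

    kubectl create -f cm-pod.yaml
    kubectl logs cm-volume-demo   # expect: value-1
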
May 11 18:56:03.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:56:04.060: INFO: namespace configmap-4572 deletion completed in 6.079579009s • [SLOW TEST:15.704 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:56:04.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8c480c4a-73bf-4db7-814d-b29e846dcba3 STEP: Creating a pod to test consume configMaps May 11 18:56:06.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873" in namespace "configmap-4807" to be "success or failure" May 11 18:56:06.034: INFO: Pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873": Phase="Pending", Reason="", readiness=false. Elapsed: 18.173911ms May 11 18:56:08.133: INFO: Pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116377319s May 11 18:56:10.137: INFO: Pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120709881s May 11 18:56:12.184: INFO: Pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167574865s STEP: Saw pod success May 11 18:56:12.184: INFO: Pod "pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873" satisfied condition "success or failure" May 11 18:56:12.186: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873 container configmap-volume-test: STEP: delete the pod May 11 18:56:12.574: INFO: Waiting for pod pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873 to disappear May 11 18:56:12.805: INFO: Pod pod-configmaps-00774463-722d-4bff-a7f2-b20f2e757873 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:56:12.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4807" for this suite. 
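The defaultMode variant only changes the permission bits the kubelet gives the mounted files; relative to the cm-pod.yaml sketch above, the difference is one field in the volume stanza (mode value illustrative; JSON manifests must use the decimal form, 256 for 0400):

      volumes:
      - name: cm
        configMap:
          name: cm-demo
          defaultMode: 0400   # mounted files show up as -r--------
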
May 11 18:56:18.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:56:18.966: INFO: namespace configmap-4807 deletion completed in 6.15818917s • [SLOW TEST:14.906 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:56:18.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 18:56:24.506: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 18:56:24.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9758" for this suite. 
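FallbackToLogsOnError falls back to the tail of the container log only when the container fails and the termination-message file is empty; on success, as asserted above, the message stays empty. A sketch with illustrative names:

    # termination-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 0"]   # succeeds without writing a termination message
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError

    kubectl create -f termination-demo.yaml
    kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # empty on success
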
May 11 18:56:30.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:56:30.938: INFO: namespace container-runtime-9758 deletion completed in 6.202045696s • [SLOW TEST:11.972 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 18:56:30.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-6cf4a3ff-5d1b-454a-9fe3-cc69aad7f076 in namespace container-probe-6528 May 11 18:56:37.499: INFO: Started pod busybox-6cf4a3ff-5d1b-454a-9fe3-cc69aad7f076 in namespace container-probe-6528 STEP: checking the pod's current state and verifying that restartCount is present May 11 18:56:37.501: INFO: Initial restart count of pod busybox-6cf4a3ff-5d1b-454a-9fe3-cc69aad7f076 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:00:38.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6528" for this suite. 
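The probe here keeps succeeding because /tmp/health is created once and never removed, so after the roughly four-minute observation window the restart count is still 0 (contrast with the earlier monotonically-increasing-restart-count spec, whose probe is made to fail). A sketch of the passing shape, names and timings illustrative:

    # liveness-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5

    kubectl create -f liveness-demo.yaml
    kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect: 0
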
May 11 19:00:45.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:00:45.396: INFO: namespace container-probe-6528 deletion completed in 6.364309543s • [SLOW TEST:254.457 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:00:45.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8952.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8952.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8952.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8952.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:01:05.306: INFO: DNS probes using dns-8952/dns-test-4c99a43d-a4a8-4cd3-9fa9-0305d970f446 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:01:06.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8952" for this suite. 
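Outside the harness, the same assertion reduces to reading /etc/hosts from inside a pod: the kubelet manages that file and injects an entry for the pod's own hostname and IP. A sketch (pod name illustrative; busybox assumed to carry the hostname applet):

    kubectl run hosts-check --image=busybox --restart=Never -- sh -c 'cat /etc/hosts; hostname -i'
    kubectl logs hosts-check
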
May 11 19:01:13.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:01:13.871: INFO: namespace dns-8952 deletion completed in 7.00668667s • [SLOW TEST:28.475 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:01:13.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0511 19:01:25.309016 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 19:01:25.309: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:01:25.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6940" for this suite. 
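Whether the RC's pods are garbage-collected or orphaned is decided by the delete call's propagation policy, not by the controller; with a kubectl of this vintage that is the boolean --cascade flag (newer kubectl spells it --cascade=background|foreground|orphan). A sketch, RC name illustrative:

    kubectl delete rc test-rc --cascade=true    # default: dependents are deleted, as this spec asserts
    kubectl delete rc test-rc --cascade=false   # orphans the pods; they keep running without an owner
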
May 11 19:01:31.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:01:31.448: INFO: namespace gc-6940 deletion completed in 6.137293852s • [SLOW TEST:17.577 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:01:31.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 19:01:31.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2300' May 11 19:01:36.699: INFO: stderr: "" May 11 19:01:36.699: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 11 19:01:46.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2300 -o json' May 11 19:01:53.902: INFO: stderr: "" May 11 19:01:53.902: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T19:01:36Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2300\",\n \"resourceVersion\": \"10307118\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2300/pods/e2e-test-nginx-pod\",\n \"uid\": \"37d2e853-b97a-4bc3-a515-5d48e8af3cf8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-82l6f\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-82l6f\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-82l6f\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T19:01:36Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T19:01:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T19:01:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T19:01:36Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b2610dd26cf7bf8d74e0766422a70959f3c3733a3c09d55c457f553356a00525\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T19:01:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.6\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T19:01:36Z\"\n }\n}\n" STEP: replace the image in the pod May 11 19:01:53.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2300' May 11 19:01:54.309: INFO: stderr: "" May 11 19:01:54.309: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 11 19:01:54.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2300' May 11 19:02:02.001: INFO: stderr: "" May 11 19:02:02.001: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:02:02.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2300" for this suite. 
May 11 19:02:08.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:02:08.249: INFO: namespace kubectl-2300 deletion completed in 6.182792825s • [SLOW TEST:36.800 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:02:08.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:02:16.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-983" for this suite. 
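With a command that always fails, the kubelet records the failure in the container's terminated state, including a reason field; that is what this spec asserts on. A sketch of reproducing and inspecting it by hand, names illustrative (the test itself uses a different restart policy):

    # always-fails.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: always-fails
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/false"]

    kubectl create -f always-fails.yaml
    kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # expect: Error
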
May 11 19:02:25.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:02:25.212: INFO: namespace kubelet-test-983 deletion completed in 8.350358723s • [SLOW TEST:16.963 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:02:25.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 11 19:02:25.958: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 19:02:26.344: INFO: Waiting for terminating namespaces to be deleted... May 11 19:02:26.346: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 11 19:02:26.353: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 19:02:26.353: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:02:26.353: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 19:02:26.353: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:02:26.353: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 11 19:02:26.362: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 11 19:02:26.362: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:02:26.362: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 11 19:02:26.362: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:02:26.362: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 11 19:02:26.362: INFO: Container coredns ready: true, restart count 0 May 11 19:02:26.362: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 11 19:02:26.362: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-3bfd3c66-9a7c-4fd4-ad0e-5568e1a3af3f 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3bfd3c66-9a7c-4fd4-ad0e-5568e1a3af3f off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3bfd3c66-9a7c-4fd4-ad0e-5568e1a3af3f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:02:43.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2842" for this suite. May 11 19:03:04.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:03:05.114: INFO: namespace sched-pred-2842 deletion completed in 21.815338176s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:39.901 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:03:05.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:03:15.640: INFO: DNS probes using dns-test-30932df4-9234-4d1a-8d58-23079be8e81d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:03:28.512: 
INFO: File wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:28.514: INFO: File jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:28.514: INFO: Lookups using dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 failed for: [wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local] May 11 19:03:33.573: INFO: File wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:33.577: INFO: File jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:33.577: INFO: Lookups using dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 failed for: [wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local] May 11 19:03:38.519: INFO: File wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:38.522: INFO: File jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local from pod dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 19:03:38.523: INFO: Lookups using dns-6520/dns-test-563803b4-7df9-46bc-a610-7a921ede3284 failed for: [wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local] May 11 19:03:44.708: INFO: DNS probes using dns-test-563803b4-7df9-46bc-a610-7a921ede3284 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6520.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6520.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:03:56.173: INFO: DNS probes using dns-test-911a6ed9-736a-4b2e-ab2e-db307df0566d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:03:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6520" for this suite. 
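The three probe rounds above exercise one behavior: a Service of type ExternalName is served by cluster DNS as a CNAME, and edits to .spec.externalName propagate to resolvers, with the transient "contains 'foo.example.com.'" failures being the window before propagation. A sketch of the same check, with the service name, namespace, and dnsutils image all illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF

# From any pod with dig, the service name resolves to a CNAME; after
# patching externalName to bar.example.com the answer follows, which is
# exactly what the probe loop polls for.
kubectl run dnsutils --image=tutum/dnsutils --restart=Never --rm -it -- \
  dig +short dns-test-service-3.default.svc.cluster.local CNAME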
May 11 19:04:08.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:08.523: INFO: namespace dns-6520 deletion completed in 12.021368154s • [SLOW TEST:63.408 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:04:08.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-99b95c6b-24d8-4275-b750-f370e2723e78 STEP: Creating a pod to test consume secrets May 11 19:04:09.621: INFO: Waiting up to 5m0s for pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631" in namespace "secrets-6944" to be "success or failure" May 11 19:04:09.941: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631": Phase="Pending", Reason="", readiness=false. Elapsed: 319.271478ms May 11 19:04:11.991: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369920897s May 11 19:04:13.994: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373242635s May 11 19:04:16.015: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394144254s May 11 19:04:18.123: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.501393596s STEP: Saw pod success May 11 19:04:18.123: INFO: Pod "pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631" satisfied condition "success or failure" May 11 19:04:18.125: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631 container secret-env-test: STEP: delete the pod May 11 19:04:18.698: INFO: Waiting for pod pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631 to disappear May 11 19:04:18.998: INFO: Pod pod-secrets-eb0b913f-5bef-4bd5-a55d-006788ffd631 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:04:18.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6944" for this suite. 
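The secret-to-environment flow above, reduced to its parts (all names are illustrative):

kubectl create secret generic secret-test --from-literal=SECRET_DATA=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: SECRET_DATA
EOF

# After the pod reaches Succeeded, the log is what the test scrapes.
kubectl logs pod-secrets-env | grep SECRET_DATA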
May 11 19:04:27.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:27.108: INFO: namespace secrets-6944 deletion completed in 8.105149274s • [SLOW TEST:18.585 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:04:27.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:04:39.949: INFO: DNS probes using dns-4689/dns-test-70bc9d26-16de-41c1-b34b-6cef87e4de2e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:04:40.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4689" for this suite. 
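Stripped of the wheezy/jessie duplication and the $$ escaping, the probe script above reduces to UDP and TCP queries for the API server's service record; both must return an answer section for the probe to write its OK marker. A sketch, with the dnsutils image illustrative:

# UDP (default) then TCP lookups for kubernetes.default, as in the probes.
kubectl run dnsutils --image=tutum/dnsutils --restart=Never --rm -it -- \
  sh -c 'dig +notcp +noall +answer kubernetes.default.svc.cluster.local A; dig +tcp +noall +answer kubernetes.default.svc.cluster.local A'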
May 11 19:04:47.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:47.172: INFO: namespace dns-4689 deletion completed in 6.394519504s • [SLOW TEST:20.063 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:04:47.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 11 19:04:47.243: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:04:47.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2571" for this suite. 
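With -p 0 the proxy asks the kernel for an ephemeral port and prints the address it bound, which is all the test needs before curling /api/. A sketch of the same round trip (the sleep is a crude stand-in for waiting on the proxy to start):

# Prints e.g. "Starting to serve on 127.0.0.1:<port>".
kubectl proxy -p 0 > /tmp/proxy.out &
PROXY_PID=$!
sleep 1

# Parse the chosen port back out and hit the API root through the proxy.
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"
kill "$PROXY_PID"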
May 11 19:04:53.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:53.433: INFO: namespace kubectl-2571 deletion completed in 6.111076976s • [SLOW TEST:6.260 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:04:53.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 19:04:53.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 11 19:04:53.829: INFO: stderr: "" May 11 19:04:53.829: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:04:53.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9023" for this suite. 
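The version check asserts nothing beyond both version.Info structs (client and server) being printed, which is the plain output of:

kubectl version

# Script-friendly forms, if only one side is wanted:
kubectl version --client
kubectl version -o json   # structured output for parsing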
May 11 19:05:01.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:01.091: INFO: namespace kubectl-9023 deletion completed in 7.258395942s • [SLOW TEST:7.657 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:05:01.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-936/configmap-test-987caadf-e1ba-474f-9560-f81ec1718655 STEP: Creating a pod to test consume configMaps May 11 19:05:01.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d" in namespace "configmap-936" to be "success or failure" May 11 19:05:02.201: INFO: Pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d": Phase="Pending", Reason="", readiness=false. Elapsed: 251.173746ms May 11 19:05:04.406: INFO: Pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456095302s May 11 19:05:07.520: INFO: Pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.570562855s May 11 19:05:09.523: INFO: Pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.573656553s STEP: Saw pod success May 11 19:05:09.524: INFO: Pod "pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d" satisfied condition "success or failure" May 11 19:05:09.526: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d container env-test: STEP: delete the pod May 11 19:05:09.658: INFO: Waiting for pod pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d to disappear May 11 19:05:10.093: INFO: Pod pod-configmaps-df10ffbe-2e0b-4c58-b5b5-c5faa2fadb8d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:05:10.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-936" for this suite. 
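The consume-a-ConfigMap-via-environment flow, reduced (names illustrative):

kubectl create configmap configmap-test --from-literal=DATA_1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: DATA_1
EOF

kubectl logs pod-configmap-env | grep DATA_1   # expect DATA_1=value-1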
May 11 19:05:16.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:16.184: INFO: namespace configmap-936 deletion completed in 6.08720153s • [SLOW TEST:15.092 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:05:16.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 19:05:16.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307832,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 19:05:16.378: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307833,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 19:05:16.378: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307834,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 19:05:26.485: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307855,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 19:05:26.485: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307856,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 11 19:05:26.485: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3759,SelfLink:/api/v1/namespaces/watch-3759/configmaps/e2e-watch-test-label-changed,UID:69b90b01-5b54-4352-bdea-e61a32bad127,ResourceVersion:10307857,Generation:0,CreationTimestamp:2020-05-11 19:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:05:26.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3759" for this suite. 
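The choreography above hinges on one rule of selector-scoped watches: an object is reported DELETED the moment its labels stop matching the selector and ADDED when they match again, even though it was only relabeled. A sketch of the same toggle (run the watch in a second terminal; --output-watch-events requires a reasonably recent kubectl):

# Terminal 1: watch only matching configmaps, printing event types.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events

# Terminal 2: create, un-match, re-match.
kubectl create configmap e2e-watch-test --from-literal=mutation=0
kubectl label configmap e2e-watch-test \
  watch-this-configmap=label-changed-and-restored
kubectl label configmap e2e-watch-test \
  watch-this-configmap=not-matching --overwrite        # watcher sees DELETED
kubectl label configmap e2e-watch-test \
  watch-this-configmap=label-changed-and-restored --overwrite   # ADDED again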
May 11 19:05:34.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:34.564: INFO: namespace watch-3759 deletion completed in 8.069215985s • [SLOW TEST:18.380 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:05:34.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 19:05:57.226: INFO: Container started at 2020-05-11 19:05:39 +0000 UTC, pod became ready at 2020-05-11 19:05:56 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:05:57.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3202" for this suite. 
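The readiness assertion above ("not ready before initial delay, never restarts") maps onto two probe fields, and onto the rule that readiness failures never restart a container. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: web
    image: nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # pod must stay NotReady at least this long
      periodSeconds: 5
EOF

# READY flips from 0/1 to 1/1 roughly initialDelaySeconds after start,
# with RESTARTS pinned at 0 throughout.
kubectl get pod readiness-demo -w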
May 11 19:06:23.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:06:23.459: INFO: namespace container-probe-3202 deletion completed in 26.230726491s • [SLOW TEST:48.894 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:06:23.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 19:06:24.604: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10307997,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 19:06:24.604: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10307997,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 19:06:34.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308017,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 19:06:34.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308017,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 19:06:44.620: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308037,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 19:06:44.620: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308037,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 19:06:54.624: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308058,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 19:06:54.624: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-a,UID:a34168eb-db16-4698-98de-76e5e3f1332b,ResourceVersion:10308058,Generation:0,CreationTimestamp:2020-05-11 19:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 19:07:04.899: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-b,UID:77bf991f-4b1d-4237-9452-f5eb99b1c3b5,ResourceVersion:10308080,Generation:0,CreationTimestamp:2020-05-11 19:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 19:07:04.900: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-b,UID:77bf991f-4b1d-4237-9452-f5eb99b1c3b5,ResourceVersion:10308080,Generation:0,CreationTimestamp:2020-05-11 19:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 19:07:14.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-b,UID:77bf991f-4b1d-4237-9452-f5eb99b1c3b5,ResourceVersion:10308100,Generation:0,CreationTimestamp:2020-05-11 19:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 19:07:14.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4745,SelfLink:/api/v1/namespaces/watch-4745/configmaps/e2e-watch-test-configmap-b,UID:77bf991f-4b1d-4237-9452-f5eb99b1c3b5,ResourceVersion:10308100,Generation:0,CreationTimestamp:2020-05-11 19:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:07:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4745" for this suite. May 11 19:07:30.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:07:31.023: INFO: namespace watch-4745 deletion completed in 6.114191777s • [SLOW TEST:67.564 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:07:31.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 19:07:31.222: INFO: Waiting up to 5m0s for pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c" in namespace "emptydir-5509" to be "success or failure" May 11 19:07:31.324: INFO: Pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.216744ms May 11 19:07:33.328: INFO: Pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105204221s May 11 19:07:35.332: INFO: Pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c": Phase="Running", Reason="", readiness=true. Elapsed: 4.109269387s May 11 19:07:37.335: INFO: Pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112430722s STEP: Saw pod success May 11 19:07:37.335: INFO: Pod "pod-691cd97f-9772-4810-889f-a3bcfb97cc1c" satisfied condition "success or failure" May 11 19:07:37.337: INFO: Trying to get logs from node iruya-worker2 pod pod-691cd97f-9772-4810-889f-a3bcfb97cc1c container test-container: STEP: delete the pod May 11 19:07:37.773: INFO: Waiting for pod pod-691cd97f-9772-4810-889f-a3bcfb97cc1c to disappear May 11 19:07:37.904: INFO: Pod pod-691cd97f-9772-4810-889f-a3bcfb97cc1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:07:37.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5509" for this suite. 
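The (root,0644,tmpfs) case decomposes into a memory-backed emptyDir, a file written with mode 0644 as uid 0, and a check of both from inside the mount. A sketch (the suite drives this through its own mount-test image; busybox stands in here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, the "tmpfs" in the test name
EOF

kubectl logs emptydir-demo   # expect -rw-r--r-- and a tmpfs mount entry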
May 11 19:07:44.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:07:44.652: INFO: namespace emptydir-5509 deletion completed in 6.656838012s • [SLOW TEST:13.628 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:07:44.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 11 19:07:44.732: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9553" to be "success or failure" May 11 19:07:44.749: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.091609ms May 11 19:07:46.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038489977s May 11 19:07:48.774: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042044483s May 11 19:07:50.779: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046583449s May 11 19:07:52.783: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050196356s May 11 19:07:56.528: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.79551787s May 11 19:07:58.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.799167975s May 11 19:08:00.826: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.093840701s STEP: Saw pod success May 11 19:08:00.826: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 11 19:08:00.828: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 19:08:01.942: INFO: Waiting for pod pod-host-path-test to disappear May 11 19:08:02.210: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:08:02.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9553" for this suite. 
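The hostPath variant is the same stat-the-mount pattern against a directory on the node. A sketch (path and type are illustrative; DirectoryOrCreate has the kubelet create the directory if it is absent):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF

kubectl logs hostpath-demo   # prints the mode the test asserts on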
May 11 19:08:10.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:08:10.350: INFO: namespace hostpath-9553 deletion completed in 8.134309691s • [SLOW TEST:25.697 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:08:10.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:09:08.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8395" for this suite. 
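The rpa/rpof/rpn suffixes above are the three restart policies (Always, OnFailure, Never); the test pins down how each combines with an exiting container to yield the expected RestartCount, Phase, Ready condition, and State. One cell of that matrix as a sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure   # swap for Always / Never to see the other rows
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF

# Under OnFailure the restart count climbs while the phase stays Running;
# with restartPolicy: Never the same command drives the pod to Failed.
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'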
May 11 19:09:17.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:09:17.147: INFO: namespace container-runtime-8395 deletion completed in 8.361428506s • [SLOW TEST:66.797 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:09:17.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 11 19:09:25.319: INFO: Pod pod-hostip-3bbd602b-14a2-440e-9ce3-f82c5f91a1c6 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:09:25.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4034" for this suite. 
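The host-IP check is a one-liner against any scheduled pod: .status.hostIP must carry the address of the node the pod landed on (172.17.0.6 above, one of the kind nodes):

# Any running pod works; the name below is illustrative.
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'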
May 11 19:09:49.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:09:49.811: INFO: namespace pods-4034 deletion completed in 24.488625485s • [SLOW TEST:32.663 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:09:49.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-a567a1a5-4dcc-4a23-88f7-c9ecb0f6f7cf STEP: Creating secret with name secret-projected-all-test-volume-67b27506-e8c8-4c08-99f8-7cf934b5cabb STEP: Creating a pod to test Check all projections for projected volume plugin May 11 19:09:49.962: INFO: Waiting up to 5m0s for pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4" in namespace "projected-3229" to be "success or failure" May 11 19:09:49.966: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493468ms May 11 19:09:52.139: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177834375s May 11 19:09:54.144: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182769185s May 11 19:09:56.148: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18657508s May 11 19:09:58.386: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424301431s May 11 19:10:00.763: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.801104324s May 11 19:10:02.930: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.968877513s May 11 19:10:04.936: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.97401251s
STEP: Saw pod success May 11 19:10:04.936: INFO: Pod "projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4" satisfied condition "success or failure" May 11 19:10:04.938: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4 container projected-all-volume-test: STEP: delete the pod May 11 19:10:05.435: INFO: Waiting for pod projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4 to disappear May 11 19:10:05.738: INFO: Pod projected-volume-4bd98a8e-4601-4ae9-ad2e-f1746ce8afc4 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:10:05.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3229" for this suite. May 11 19:10:11.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:10:12.016: INFO: namespace projected-3229 deletion completed in 6.275156888s • [SLOW TEST:22.204 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:10:12.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-30309fd9-dea5-4641-adca-d339bf53ee6b STEP: Creating configMap with name cm-test-opt-upd-52f13b22-84f2-4651-b9fc-9cad796aa1f9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-30309fd9-dea5-4641-adca-d339bf53ee6b STEP: Updating configmap cm-test-opt-upd-52f13b22-84f2-4651-b9fc-9cad796aa1f9 STEP: Creating configMap with name cm-test-opt-create-e5ab37c4-83e2-49e6-9582-d9830b5b8822 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:11:58.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6413" for this suite.
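The delete/update/create sequence above (remove cm-test-opt-del, edit cm-test-opt-upd, create cm-test-opt-create while the pod keeps running) only works because the volumes are marked optional. A sketch of the volume shape involved, with hypothetical names:

package demo

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume returns a volume that tolerates its ConfigMap being
// absent at pod start: the mount comes up empty and is populated by the
// kubelet once the ConfigMap appears, which is what lets the test create
// cm-test-opt-create after the pod is already running.
func optionalConfigMapVolume(cmName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-vol",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Optional:             &optional,
			},
		},
	}
}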
May 11 19:12:28.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:12:28.925: INFO: namespace configmap-6413 deletion completed in 30.330614456s • [SLOW TEST:136.909 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:12:28.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-07a58bfd-6001-447a-9e98-42a73b991b67 STEP: Creating a pod to test consume configMaps May 11 19:12:29.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5" in namespace "configmap-2494" to be "success or failure" May 11 19:12:29.664: INFO: Pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.534513ms May 11 19:12:31.729: INFO: Pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097864546s May 11 19:12:33.734: INFO: Pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102708649s May 11 19:12:35.737: INFO: Pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105764478s STEP: Saw pod success May 11 19:12:35.737: INFO: Pod "pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5" satisfied condition "success or failure" May 11 19:12:35.739: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5 container configmap-volume-test: STEP: delete the pod May 11 19:12:35.941: INFO: Waiting for pod pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5 to disappear May 11 19:12:35.969: INFO: Pod pod-configmaps-78d36a84-e78a-43ae-a795-8956ab717db5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:12:35.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2494" for this suite. 
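"As non-root" in the test above means only that the consuming container runs with a non-zero UID while reading the ConfigMap volume. A sketch of that pod spec; the UID, image, command, and key name are illustrative assumptions, while the container name matches the log:

package demo

import corev1 "k8s.io/api/core/v1"

// nonRootConfigMapPodSpec runs the reader container as UID 1000 (any non-zero
// UID exercises the same path) and mounts the ConfigMap read-only.
func nonRootConfigMapPodSpec(cmName string) corev1.PodSpec {
	uid := int64(1000)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		RestartPolicy:   corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"cat", "/etc/configmap-volume/data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "cm",
				MountPath: "/etc/configmap-volume",
				ReadOnly:  true,
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "cm",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			},
		}},
	}
}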
May 11 19:12:42.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:12:42.211: INFO: namespace configmap-2494 deletion completed in 6.237363415s • [SLOW TEST:13.285 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:12:42.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5f53499d-65d8-435c-82b3-07336b5a1d16 STEP: Creating a pod to test consume secrets May 11 19:12:42.876: INFO: Waiting up to 5m0s for pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49" in namespace "secrets-3598" to be "success or failure" May 11 19:12:43.060: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49": Phase="Pending", Reason="", readiness=false. Elapsed: 184.566556ms May 11 19:12:45.082: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206367672s May 11 19:12:47.086: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210687518s May 11 19:12:49.100: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224677429s May 11 19:12:51.105: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.228800309s STEP: Saw pod success May 11 19:12:51.105: INFO: Pod "pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49" satisfied condition "success or failure" May 11 19:12:51.108: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49 container secret-volume-test: STEP: delete the pod May 11 19:12:51.126: INFO: Waiting for pod pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49 to disappear May 11 19:12:51.130: INFO: Pod pod-secrets-a405b248-25d8-4d77-92e5-0ac1d0169e49 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:12:51.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3598" for this suite. 
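The Secret variant above layers two knobs on top of the plain volume test: an explicit defaultMode on the projected files and an fsGroup on the pod, which the kubelet applies as group ownership on the mount. A sketch with illustrative UID/GID/mode values (the suite's exact values are not visible in this log):

package demo

import corev1 "k8s.io/api/core/v1"

// secretPodSpec mounts a Secret with an explicit defaultMode and sets both
// runAsUser and fsGroup; the kubelet then projects the secret files with the
// requested mode and group ownership, which is what the test asserts.
func secretPodSpec(secretName string) corev1.PodSpec {
	uid, fsGroup := int64(1000), int64(1001)
	mode := int32(0440)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
		RestartPolicy:   corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"ls", "-l", "/etc/secret-volume"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret",
				MountPath: "/etc/secret-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "secret",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  secretName,
					DefaultMode: &mode,
				},
			},
		}},
	}
}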
May 11 19:12:57.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:12:57.354: INFO: namespace secrets-3598 deletion completed in 6.221040576s • [SLOW TEST:15.143 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:12:57.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 19:12:57.635: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1eceea95-7595-4e13-8db8-3defb404eafd", Controller:(*bool)(0xc002bb828a), BlockOwnerDeletion:(*bool)(0xc002bb828b)}} May 11 19:12:57.820: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f26a28f4-353f-4c85-9968-023ef6495abd", Controller:(*bool)(0xc00306d512), BlockOwnerDeletion:(*bool)(0xc00306d513)}} May 11 19:12:57.825: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5caa3458-c13e-4400-b596-3a2f222c16f4", Controller:(*bool)(0xc001366dda), BlockOwnerDeletion:(*bool)(0xc001366ddb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:13:03.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1208" for this suite. 
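The OwnerReferences dumped above form a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each with controller and blockOwnerDeletion set. A sketch of how one such reference is attached (the helper name is hypothetical):

package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedBy points child at owner the same way the test wires pod1 -> pod3,
// pod2 -> pod1 and pod3 -> pod2; Controller and BlockOwnerDeletion mirror the
// *bool values visible in the dump. The test then deletes the pods and checks
// that the garbage collector is not wedged by the cycle.
func ownedBy(child, owner *corev1.Pod) {
	controller, block := true, true
	child.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}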
May 11 19:13:13.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:13:13.804: INFO: namespace gc-1208 deletion completed in 10.564083764s • [SLOW TEST:16.450 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:13:13.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2d6ee580-f74c-41dd-af19-5d995a92d80c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2d6ee580-f74c-41dd-af19-5d995a92d80c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:14:30.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9620" for this suite. 
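The long "waiting to observe update in volume" phase above (over a minute in this run) is expected behavior: the kubelet propagates ConfigMap edits into projected volumes asynchronously on its sync loop, so consumers see the new content eventually, not atomically with the API update. A sketch of the volume shape being exercised, with hypothetical names:

package demo

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume builds the projected variant of a ConfigMap mount;
// updating the referenced ConfigMap rewrites the files in place after the
// kubelet's next sync of the volume.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}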
May 11 19:14:49.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:14:50.570: INFO: namespace projected-9620 deletion completed in 20.226290559s • [SLOW TEST:96.766 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:14:50.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-6496cb9b-6e7d-4e0a-a688-c6f33b101a07 STEP: Creating a pod to test consume configMaps May 11 19:14:51.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d" in namespace "projected-5352" to be "success or failure" May 11 19:14:51.207: INFO: Pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.241071ms May 11 19:14:53.216: INFO: Pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024340219s May 11 19:14:55.219: INFO: Pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027222311s May 11 19:14:57.222: INFO: Pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030118087s STEP: Saw pod success May 11 19:14:57.222: INFO: Pod "pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d" satisfied condition "success or failure" May 11 19:14:57.224: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d container projected-configmap-volume-test: STEP: delete the pod May 11 19:14:57.511: INFO: Waiting for pod pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d to disappear May 11 19:14:57.682: INFO: Pod pod-projected-configmaps-148617a3-5260-4e52-98ad-6bf6a513369d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:14:57.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5352" for this suite. 
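"Consumable in multiple volumes in the same pod" means one ConfigMap backing two distinct volumes mounted at two paths, and the pod reading the same data back at both. A sketch under that reading; the volume names are hypothetical:

package demo

import corev1 "k8s.io/api/core/v1"

// twoVolumesOneConfigMap returns two volumes backed by the same ConfigMap;
// the pod mounts both and must see identical content at each mount point.
func twoVolumesOneConfigMap(cmName string) []corev1.Volume {
	src := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		}
	}
	return []corev1.Volume{
		{Name: "projected-configmap-volume", VolumeSource: src()},
		{Name: "projected-configmap-volume-2", VolumeSource: src()},
	}
}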
May 11 19:15:03.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:15:03.771: INFO: namespace projected-5352 deletion completed in 6.087051661s • [SLOW TEST:13.201 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:15:03.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 19:15:03.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd" in namespace "downward-api-8188" to be "success or failure" May 11 19:15:03.887: INFO: Pod "downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180643ms May 11 19:15:06.156: INFO: Pod "downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272586853s May 11 19:15:08.324: INFO: Pod "downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.439880973s STEP: Saw pod success May 11 19:15:08.324: INFO: Pod "downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd" satisfied condition "success or failure" May 11 19:15:08.327: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd container client-container: STEP: delete the pod May 11 19:15:08.534: INFO: Waiting for pod downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd to disappear May 11 19:15:08.540: INFO: Pod downwardapi-volume-36e6c582-d3de-4d79-8d0b-7bdaff9e4ddd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:15:08.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8188" for this suite. 
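The cpu-request test above reads a file produced by a downward API volume rather than an environment variable. A sketch of the relevant volume item: "client-container" matches the container name in the log, while the volume name and file path are hypothetical:

package demo

import corev1 "k8s.io/api/core/v1"

// cpuRequestDownwardVolume exposes the container's requests.cpu as a file;
// the test creates the pod, waits for it to succeed, and compares the file's
// content against the request it set on the container.
func cpuRequestDownwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
}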
May 11 19:15:14.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:15:14.723: INFO: namespace downward-api-8188 deletion completed in 6.180909292s • [SLOW TEST:10.952 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:15:14.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 19:15:14.857: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 11 19:15:14.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:14.888: INFO: Number of nodes with available pods: 0 May 11 19:15:14.888: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:15.892: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:15.894: INFO: Number of nodes with available pods: 0 May 11 19:15:15.894: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:16.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:16.921: INFO: Number of nodes with available pods: 0 May 11 19:15:16.921: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:17.899: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:17.902: INFO: Number of nodes with available pods: 0 May 11 19:15:17.902: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:18.996: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:19.046: INFO: Number of nodes with available pods: 0 May 11 19:15:19.046: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:20.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 11 19:15:20.117: INFO: Number of nodes with available pods: 0 May 11 19:15:20.117: INFO: Node iruya-worker is running more than one daemon pod May 11 19:15:20.893: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:20.897: INFO: Number of nodes with available pods: 2 May 11 19:15:20.897: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 11 19:15:20.999: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:20.999: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:21.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:22.025: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:22.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:22.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:23.024: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:23.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:23.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:24.066: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:24.066: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:24.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:25.026: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:25.026: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:25.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:25.030: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:26.026: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:15:26.026: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:26.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:26.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:27.027: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:27.027: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:27.027: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:27.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:28.026: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:28.026: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:28.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:28.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:29.025: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:29.026: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:29.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:29.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:30.025: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:30.025: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:30.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:30.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:31.025: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:31.025: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:31.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:31.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:32.049: INFO: Wrong image for pod: daemon-set-gnsqr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:15:32.049: INFO: Pod daemon-set-gnsqr is not available May 11 19:15:32.049: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:32.053: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:33.026: INFO: Pod daemon-set-gmwl5 is not available May 11 19:15:33.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:33.030: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:34.163: INFO: Pod daemon-set-gmwl5 is not available May 11 19:15:34.163: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:34.167: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:35.026: INFO: Pod daemon-set-gmwl5 is not available May 11 19:15:35.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:35.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:36.025: INFO: Pod daemon-set-gmwl5 is not available May 11 19:15:36.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:36.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:37.025: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:37.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:38.072: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:38.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:39.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:39.026: INFO: Pod daemon-set-qdq6n is not available May 11 19:15:39.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:40.027: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:15:40.027: INFO: Pod daemon-set-qdq6n is not available May 11 19:15:40.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:41.026: INFO: Wrong image for pod: daemon-set-qdq6n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:15:41.026: INFO: Pod daemon-set-qdq6n is not available May 11 19:15:41.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:42.041: INFO: Pod daemon-set-hgpvs is not available May 11 19:15:42.045: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 11 19:15:42.091: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:42.094: INFO: Number of nodes with available pods: 1 May 11 19:15:42.094: INFO: Node iruya-worker2 is running more than one daemon pod May 11 19:15:43.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:43.103: INFO: Number of nodes with available pods: 1 May 11 19:15:43.103: INFO: Node iruya-worker2 is running more than one daemon pod May 11 19:15:44.247: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:44.250: INFO: Number of nodes with available pods: 1 May 11 19:15:44.250: INFO: Node iruya-worker2 is running more than one daemon pod May 11 19:15:45.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:45.102: INFO: Number of nodes with available pods: 1 May 11 19:15:45.102: INFO: Node iruya-worker2 is running more than one daemon pod May 11 19:15:46.127: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:15:46.129: INFO: Number of nodes with available pods: 2 May 11 19:15:46.129: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5128, will wait for the garbage collector to delete the pods May 11 19:15:46.197: INFO: Deleting DaemonSet.extensions daemon-set took: 6.194521ms May 11 19:15:46.598: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.220974ms May 11 19:16:02.384: INFO: Number of nodes with available pods: 0 May 11 19:16:02.384: INFO: Number of running nodes: 0, number of available pods: 0 May 11 19:16:02.387: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5128/daemonsets","resourceVersion":"10309551"},"items":null} May 11 19:16:02.453: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5128/pods","resourceVersion":"10309552"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:16:02.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5128" for this suite. May 11 19:16:13.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:16:13.985: INFO: namespace daemonsets-5128 deletion completed in 11.520472838s • [SLOW TEST:59.261 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:16:13.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 19:16:14.819: INFO: PodSpec: initContainers in spec.initContainers May 11 19:17:18.611: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6dd3dd33-4c38-4f17-a592-0ad69e9ed7c1", GenerateName:"", Namespace:"init-container-1835", SelfLink:"/api/v1/namespaces/init-container-1835/pods/pod-init-6dd3dd33-4c38-4f17-a592-0ad69e9ed7c1", UID:"2a854ced-98e0-4044-9a71-044d1219db8d", ResourceVersion:"10309756", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724821374, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"819983019"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xbjwg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020a2580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbjwg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbjwg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbjwg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028dd558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002008540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028dd5e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028dd600)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0028dd608), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028dd60c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821374, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.23", StartTime:(*v1.Time)(0xc0016602c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002504ee0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002504f50)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3d63d100f37e68aebf672a1ac9f02952160fcd4cfde95c473617c316139a987d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016603e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001660380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:17:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1835" for this suite. 
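The giant pod dump above reduces to a small spec, reconstructed here from the fields in the dump itself: init1 runs /bin/false and fails forever, so init2 and the app container run1 never start, and because restartPolicy is Always the kubelet retries init1 with backoff. That climbing retry count (RestartCount:3 at dump time, with the pod stuck in Pending and Initialized=False) is exactly what the test asserts on.

package demo

import corev1 "k8s.io/api/core/v1"

// failingInitPodSpec mirrors the pod in the dump: a permanently failing first
// init container gates the rest of the pod while the kubelet keeps
// restarting it.
func failingInitPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
		},
	}
}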
May 11 19:17:42.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:17:42.736: INFO: namespace init-container-1835 deletion completed in 24.106906085s • [SLOW TEST:88.750 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:17:42.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-4e978bd8-e8c5-4e1b-8801-9355af8ced81 STEP: Creating a pod to test consume configMaps May 11 19:17:44.365: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd" in namespace "projected-7872" to be "success or failure" May 11 19:17:44.924: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 559.143307ms May 11 19:17:46.930: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565242132s May 11 19:17:48.934: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568994557s May 11 19:17:50.937: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572167793s May 11 19:17:52.941: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.575950923s STEP: Saw pod success May 11 19:17:52.941: INFO: Pod "pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd" satisfied condition "success or failure" May 11 19:17:52.943: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd container projected-configmap-volume-test: STEP: delete the pod May 11 19:17:53.308: INFO: Waiting for pod pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd to disappear May 11 19:17:53.311: INFO: Pod pod-projected-configmaps-4dcf5f05-5871-4225-8cb1-82bc8c5e1fbd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:17:53.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7872" for this suite. 
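The "with mappings" flavor above differs from the plain projected ConfigMap test only in remapping a key to a chosen path via items. A sketch; the key and path below are hypothetical:

package demo

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapProjection mounts only the listed key, renamed to the given
// path relative to the mount point, instead of one file per key.
func mappedConfigMapProjection(cmName string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			Items: []corev1.KeyToPath{
				{Key: "data-1", Path: "path/to/data-2"},
			},
		},
	}
}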
May 11 19:18:01.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:18:01.654: INFO: namespace projected-7872 deletion completed in 8.338492984s • [SLOW TEST:18.918 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:18:01.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 19:18:01.823: INFO: Waiting up to 5m0s for pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59" in namespace "emptydir-350" to be "success or failure" May 11 19:18:01.839: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59": Phase="Pending", Reason="", readiness=false. Elapsed: 15.593489ms May 11 19:18:03.843: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019865584s May 11 19:18:05.858: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034571301s May 11 19:18:08.009: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185753077s May 11 19:18:10.013: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.189977761s STEP: Saw pod success May 11 19:18:10.013: INFO: Pod "pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59" satisfied condition "success or failure" May 11 19:18:10.016: INFO: Trying to get logs from node iruya-worker2 pod pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59 container test-container: STEP: delete the pod May 11 19:18:10.079: INFO: Waiting for pod pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59 to disappear May 11 19:18:10.122: INFO: Pod pod-ff304f5f-5f84-4fba-a4bd-254e3cc99d59 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:18:10.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-350" for this suite. 
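In the emptyDir test matrix, "(non-root,0777,tmpfs)" decodes as: run the writer with a non-zero UID, expect 0777 permissions on the created file, and back the volume with memory. Only the volume half needs anything beyond the defaults; a minimal sketch, with a hypothetical volume name:

package demo

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir requests a memory-backed emptyDir; on Linux the kubelet
// mounts a tmpfs for it, which is why this test is tagged [LinuxOnly].
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
}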
May 11 19:18:16.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:18:16.225: INFO: namespace emptydir-350 deletion completed in 6.099268159s • [SLOW TEST:14.570 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:18:16.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7461 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-7461 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7461 May 11 19:18:16.835: INFO: Found 0 stateful pods, waiting for 1 May 11 19:18:26.843: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false May 11 19:18:36.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 11 19:18:36.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 19:18:53.008: INFO: stderr: "I0511 19:18:52.697502 2964 log.go:172] (0xc0003d22c0) (0xc00063c960) Create stream\nI0511 19:18:52.697543 2964 log.go:172] (0xc0003d22c0) (0xc00063c960) Stream added, broadcasting: 1\nI0511 19:18:52.703884 2964 log.go:172] (0xc0003d22c0) Reply frame received for 1\nI0511 19:18:52.703928 2964 log.go:172] (0xc0003d22c0) (0xc00063ca00) Create stream\nI0511 19:18:52.703940 2964 log.go:172] (0xc0003d22c0) (0xc00063ca00) Stream added, broadcasting: 3\nI0511 19:18:52.704949 2964 log.go:172] (0xc0003d22c0) Reply frame received for 3\nI0511 19:18:52.704976 2964 log.go:172] (0xc0003d22c0) (0xc00053a000) Create stream\nI0511 19:18:52.704988 2964 log.go:172] (0xc0003d22c0) (0xc00053a000) Stream added, broadcasting: 5\nI0511 19:18:52.706142 2964 log.go:172] (0xc0003d22c0) Reply frame received for 5\nI0511 19:18:52.751092 2964 log.go:172] (0xc0003d22c0) Data frame received for 5\nI0511 19:18:52.751113 2964 log.go:172] (0xc00053a000) (5) Data frame 
handling\nI0511 19:18:52.751125 2964 log.go:172] (0xc00053a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 19:18:53.001789 2964 log.go:172] (0xc0003d22c0) Data frame received for 3\nI0511 19:18:53.001835 2964 log.go:172] (0xc00063ca00) (3) Data frame handling\nI0511 19:18:53.001857 2964 log.go:172] (0xc00063ca00) (3) Data frame sent\nI0511 19:18:53.002036 2964 log.go:172] (0xc0003d22c0) Data frame received for 3\nI0511 19:18:53.002058 2964 log.go:172] (0xc00063ca00) (3) Data frame handling\nI0511 19:18:53.002079 2964 log.go:172] (0xc0003d22c0) Data frame received for 5\nI0511 19:18:53.002110 2964 log.go:172] (0xc00053a000) (5) Data frame handling\nI0511 19:18:53.003648 2964 log.go:172] (0xc0003d22c0) Data frame received for 1\nI0511 19:18:53.003656 2964 log.go:172] (0xc00063c960) (1) Data frame handling\nI0511 19:18:53.003663 2964 log.go:172] (0xc00063c960) (1) Data frame sent\nI0511 19:18:53.003803 2964 log.go:172] (0xc0003d22c0) (0xc00063c960) Stream removed, broadcasting: 1\nI0511 19:18:53.003830 2964 log.go:172] (0xc0003d22c0) Go away received\nI0511 19:18:53.004133 2964 log.go:172] (0xc0003d22c0) (0xc00063c960) Stream removed, broadcasting: 1\nI0511 19:18:53.004148 2964 log.go:172] (0xc0003d22c0) (0xc00063ca00) Stream removed, broadcasting: 3\nI0511 19:18:53.004154 2964 log.go:172] (0xc0003d22c0) (0xc00053a000) Stream removed, broadcasting: 5\n" May 11 19:18:53.008: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 19:18:53.008: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 19:18:53.027: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 19:19:03.031: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:19:03.031: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:19:03.458: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:03.458: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:03.458: INFO: May 11 19:19:03.458: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 19:19:04.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.581437329s May 11 19:19:05.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.540514717s May 11 19:19:06.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.352284616s May 11 19:19:07.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.347977039s May 11 19:19:08.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.305521907s May 11 19:19:09.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.300691959s May 11 19:19:10.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.29630581s May 11 19:19:11.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.292997011s May 11 19:19:12.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 288.871402ms STEP: Scaling up stateful set ss to 3 replicas and waiting until 
all of them will be running in namespace statefulset-7461 May 11 19:19:13.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:14.644: INFO: stderr: "I0511 19:19:14.559182 2993 log.go:172] (0xc0009a0370) (0xc00090c640) Create stream\nI0511 19:19:14.559240 2993 log.go:172] (0xc0009a0370) (0xc00090c640) Stream added, broadcasting: 1\nI0511 19:19:14.561518 2993 log.go:172] (0xc0009a0370) Reply frame received for 1\nI0511 19:19:14.561548 2993 log.go:172] (0xc0009a0370) (0xc00090c6e0) Create stream\nI0511 19:19:14.561556 2993 log.go:172] (0xc0009a0370) (0xc00090c6e0) Stream added, broadcasting: 3\nI0511 19:19:14.562223 2993 log.go:172] (0xc0009a0370) Reply frame received for 3\nI0511 19:19:14.562257 2993 log.go:172] (0xc0009a0370) (0xc00092c000) Create stream\nI0511 19:19:14.562274 2993 log.go:172] (0xc0009a0370) (0xc00092c000) Stream added, broadcasting: 5\nI0511 19:19:14.562991 2993 log.go:172] (0xc0009a0370) Reply frame received for 5\nI0511 19:19:14.637698 2993 log.go:172] (0xc0009a0370) Data frame received for 3\nI0511 19:19:14.637726 2993 log.go:172] (0xc00090c6e0) (3) Data frame handling\nI0511 19:19:14.637740 2993 log.go:172] (0xc00090c6e0) (3) Data frame sent\nI0511 19:19:14.637845 2993 log.go:172] (0xc0009a0370) Data frame received for 3\nI0511 19:19:14.637859 2993 log.go:172] (0xc00090c6e0) (3) Data frame handling\nI0511 19:19:14.637874 2993 log.go:172] (0xc0009a0370) Data frame received for 5\nI0511 19:19:14.637893 2993 log.go:172] (0xc00092c000) (5) Data frame handling\nI0511 19:19:14.637904 2993 log.go:172] (0xc00092c000) (5) Data frame sent\nI0511 19:19:14.637910 2993 log.go:172] (0xc0009a0370) Data frame received for 5\nI0511 19:19:14.637915 2993 log.go:172] (0xc00092c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 19:19:14.639270 2993 log.go:172] (0xc0009a0370) Data frame received for 1\nI0511 19:19:14.639288 2993 log.go:172] (0xc00090c640) (1) Data frame handling\nI0511 19:19:14.639305 2993 log.go:172] (0xc00090c640) (1) Data frame sent\nI0511 19:19:14.639314 2993 log.go:172] (0xc0009a0370) (0xc00090c640) Stream removed, broadcasting: 1\nI0511 19:19:14.639459 2993 log.go:172] (0xc0009a0370) Go away received\nI0511 19:19:14.639646 2993 log.go:172] (0xc0009a0370) (0xc00090c640) Stream removed, broadcasting: 1\nI0511 19:19:14.639667 2993 log.go:172] (0xc0009a0370) (0xc00090c6e0) Stream removed, broadcasting: 3\nI0511 19:19:14.639678 2993 log.go:172] (0xc0009a0370) (0xc00092c000) Stream removed, broadcasting: 5\n" May 11 19:19:14.644: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 19:19:14.644: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 19:19:14.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:15.543: INFO: stderr: "I0511 19:19:15.466594 3013 log.go:172] (0xc0009de370) (0xc0009a06e0) Create stream\nI0511 19:19:15.466658 3013 log.go:172] (0xc0009de370) (0xc0009a06e0) Stream added, broadcasting: 1\nI0511 19:19:15.468908 3013 log.go:172] (0xc0009de370) Reply frame received for 1\nI0511 19:19:15.468948 3013 log.go:172] (0xc0009de370) (0xc000600140) Create stream\nI0511 19:19:15.468964 3013 log.go:172] (0xc0009de370) (0xc000600140) 
Stream added, broadcasting: 3\nI0511 19:19:15.470089 3013 log.go:172] (0xc0009de370) Reply frame received for 3\nI0511 19:19:15.470126 3013 log.go:172] (0xc0009de370) (0xc0009a0780) Create stream\nI0511 19:19:15.470142 3013 log.go:172] (0xc0009de370) (0xc0009a0780) Stream added, broadcasting: 5\nI0511 19:19:15.470928 3013 log.go:172] (0xc0009de370) Reply frame received for 5\nI0511 19:19:15.538794 3013 log.go:172] (0xc0009de370) Data frame received for 3\nI0511 19:19:15.538834 3013 log.go:172] (0xc000600140) (3) Data frame handling\nI0511 19:19:15.538842 3013 log.go:172] (0xc000600140) (3) Data frame sent\nI0511 19:19:15.538847 3013 log.go:172] (0xc0009de370) Data frame received for 3\nI0511 19:19:15.538853 3013 log.go:172] (0xc000600140) (3) Data frame handling\nI0511 19:19:15.538879 3013 log.go:172] (0xc0009de370) Data frame received for 5\nI0511 19:19:15.538890 3013 log.go:172] (0xc0009a0780) (5) Data frame handling\nI0511 19:19:15.538898 3013 log.go:172] (0xc0009a0780) (5) Data frame sent\nI0511 19:19:15.538906 3013 log.go:172] (0xc0009de370) Data frame received for 5\nI0511 19:19:15.538912 3013 log.go:172] (0xc0009a0780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 19:19:15.539891 3013 log.go:172] (0xc0009de370) Data frame received for 1\nI0511 19:19:15.539908 3013 log.go:172] (0xc0009a06e0) (1) Data frame handling\nI0511 19:19:15.539925 3013 log.go:172] (0xc0009a06e0) (1) Data frame sent\nI0511 19:19:15.539940 3013 log.go:172] (0xc0009de370) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0511 19:19:15.539953 3013 log.go:172] (0xc0009de370) Go away received\nI0511 19:19:15.540203 3013 log.go:172] (0xc0009de370) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0511 19:19:15.540216 3013 log.go:172] (0xc0009de370) (0xc000600140) Stream removed, broadcasting: 3\nI0511 19:19:15.540221 3013 log.go:172] (0xc0009de370) (0xc0009a0780) Stream removed, broadcasting: 5\n" May 11 19:19:15.543: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 19:19:15.543: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 19:19:15.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:15.755: INFO: stderr: "I0511 19:19:15.664748 3034 log.go:172] (0xc0007fcb00) (0xc0007eaf00) Create stream\nI0511 19:19:15.664790 3034 log.go:172] (0xc0007fcb00) (0xc0007eaf00) Stream added, broadcasting: 1\nI0511 19:19:15.667597 3034 log.go:172] (0xc0007fcb00) Reply frame received for 1\nI0511 19:19:15.667853 3034 log.go:172] (0xc0007fcb00) (0xc0007e4000) Create stream\nI0511 19:19:15.667865 3034 log.go:172] (0xc0007fcb00) (0xc0007e4000) Stream added, broadcasting: 3\nI0511 19:19:15.668493 3034 log.go:172] (0xc0007fcb00) Reply frame received for 3\nI0511 19:19:15.668521 3034 log.go:172] (0xc0007fcb00) (0xc00030db80) Create stream\nI0511 19:19:15.668529 3034 log.go:172] (0xc0007fcb00) (0xc00030db80) Stream added, broadcasting: 5\nI0511 19:19:15.669347 3034 log.go:172] (0xc0007fcb00) Reply frame received for 5\nI0511 19:19:15.748950 3034 log.go:172] (0xc0007fcb00) Data frame received for 3\nI0511 19:19:15.748977 3034 log.go:172] (0xc0007e4000) (3) Data frame handling\nI0511 19:19:15.748986 3034 log.go:172] (0xc0007e4000) (3) Data frame sent\nI0511 19:19:15.748993 3034 
log.go:172] (0xc0007fcb00) Data frame received for 3\nI0511 19:19:15.748998 3034 log.go:172] (0xc0007e4000) (3) Data frame handling\nI0511 19:19:15.749019 3034 log.go:172] (0xc0007fcb00) Data frame received for 5\nI0511 19:19:15.749026 3034 log.go:172] (0xc00030db80) (5) Data frame handling\nI0511 19:19:15.749034 3034 log.go:172] (0xc00030db80) (5) Data frame sent\nI0511 19:19:15.749041 3034 log.go:172] (0xc0007fcb00) Data frame received for 5\nI0511 19:19:15.749047 3034 log.go:172] (0xc00030db80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 19:19:15.750748 3034 log.go:172] (0xc0007fcb00) Data frame received for 1\nI0511 19:19:15.750762 3034 log.go:172] (0xc0007eaf00) (1) Data frame handling\nI0511 19:19:15.750777 3034 log.go:172] (0xc0007eaf00) (1) Data frame sent\nI0511 19:19:15.750791 3034 log.go:172] (0xc0007fcb00) (0xc0007eaf00) Stream removed, broadcasting: 1\nI0511 19:19:15.750833 3034 log.go:172] (0xc0007fcb00) Go away received\nI0511 19:19:15.751113 3034 log.go:172] (0xc0007fcb00) (0xc0007eaf00) Stream removed, broadcasting: 1\nI0511 19:19:15.751126 3034 log.go:172] (0xc0007fcb00) (0xc0007e4000) Stream removed, broadcasting: 3\nI0511 19:19:15.751133 3034 log.go:172] (0xc0007fcb00) (0xc00030db80) Stream removed, broadcasting: 5\n" May 11 19:19:15.755: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 19:19:15.755: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 19:19:15.849: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 19:19:15.849: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 19:19:15.849: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 19:19:15.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 19:19:16.034: INFO: stderr: "I0511 19:19:15.970601 3050 log.go:172] (0xc00092e0b0) (0xc0009046e0) Create stream\nI0511 19:19:15.970648 3050 log.go:172] (0xc00092e0b0) (0xc0009046e0) Stream added, broadcasting: 1\nI0511 19:19:15.972667 3050 log.go:172] (0xc00092e0b0) Reply frame received for 1\nI0511 19:19:15.972700 3050 log.go:172] (0xc00092e0b0) (0xc0005ba3c0) Create stream\nI0511 19:19:15.972714 3050 log.go:172] (0xc00092e0b0) (0xc0005ba3c0) Stream added, broadcasting: 3\nI0511 19:19:15.973570 3050 log.go:172] (0xc00092e0b0) Reply frame received for 3\nI0511 19:19:15.973601 3050 log.go:172] (0xc00092e0b0) (0xc000562000) Create stream\nI0511 19:19:15.973612 3050 log.go:172] (0xc00092e0b0) (0xc000562000) Stream added, broadcasting: 5\nI0511 19:19:15.974292 3050 log.go:172] (0xc00092e0b0) Reply frame received for 5\nI0511 19:19:16.027736 3050 log.go:172] (0xc00092e0b0) Data frame received for 5\nI0511 19:19:16.027757 3050 log.go:172] (0xc000562000) (5) Data frame handling\nI0511 19:19:16.027765 3050 log.go:172] (0xc000562000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 19:19:16.027798 3050 log.go:172] (0xc00092e0b0) Data frame received for 3\nI0511 19:19:16.027834 3050 log.go:172] (0xc0005ba3c0) (3) Data frame handling\nI0511 19:19:16.027873 3050 log.go:172] (0xc0005ba3c0) (3) Data frame 
sent\nI0511 19:19:16.027893 3050 log.go:172] (0xc00092e0b0) Data frame received for 3\nI0511 19:19:16.027906 3050 log.go:172] (0xc0005ba3c0) (3) Data frame handling\nI0511 19:19:16.027924 3050 log.go:172] (0xc00092e0b0) Data frame received for 5\nI0511 19:19:16.027937 3050 log.go:172] (0xc000562000) (5) Data frame handling\nI0511 19:19:16.029099 3050 log.go:172] (0xc00092e0b0) Data frame received for 1\nI0511 19:19:16.029244 3050 log.go:172] (0xc0009046e0) (1) Data frame handling\nI0511 19:19:16.029258 3050 log.go:172] (0xc0009046e0) (1) Data frame sent\nI0511 19:19:16.029273 3050 log.go:172] (0xc00092e0b0) (0xc0009046e0) Stream removed, broadcasting: 1\nI0511 19:19:16.029284 3050 log.go:172] (0xc00092e0b0) Go away received\nI0511 19:19:16.029717 3050 log.go:172] (0xc00092e0b0) (0xc0009046e0) Stream removed, broadcasting: 1\nI0511 19:19:16.029739 3050 log.go:172] (0xc00092e0b0) (0xc0005ba3c0) Stream removed, broadcasting: 3\nI0511 19:19:16.029749 3050 log.go:172] (0xc00092e0b0) (0xc000562000) Stream removed, broadcasting: 5\n" May 11 19:19:16.034: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 19:19:16.034: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 19:19:16.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 19:19:16.957: INFO: stderr: "I0511 19:19:16.640365 3070 log.go:172] (0xc000116fd0) (0xc0006a2aa0) Create stream\nI0511 19:19:16.640426 3070 log.go:172] (0xc000116fd0) (0xc0006a2aa0) Stream added, broadcasting: 1\nI0511 19:19:16.643223 3070 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0511 19:19:16.643265 3070 log.go:172] (0xc000116fd0) (0xc0003179a0) Create stream\nI0511 19:19:16.643276 3070 log.go:172] (0xc000116fd0) (0xc0003179a0) Stream added, broadcasting: 3\nI0511 19:19:16.644201 3070 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0511 19:19:16.644241 3070 log.go:172] (0xc000116fd0) (0xc0008b6000) Create stream\nI0511 19:19:16.644254 3070 log.go:172] (0xc000116fd0) (0xc0008b6000) Stream added, broadcasting: 5\nI0511 19:19:16.644854 3070 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0511 19:19:16.706516 3070 log.go:172] (0xc000116fd0) Data frame received for 5\nI0511 19:19:16.706694 3070 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0511 19:19:16.706723 3070 log.go:172] (0xc0008b6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 19:19:16.950921 3070 log.go:172] (0xc000116fd0) Data frame received for 5\nI0511 19:19:16.950967 3070 log.go:172] (0xc0008b6000) (5) Data frame handling\nI0511 19:19:16.950998 3070 log.go:172] (0xc000116fd0) Data frame received for 3\nI0511 19:19:16.951011 3070 log.go:172] (0xc0003179a0) (3) Data frame handling\nI0511 19:19:16.951037 3070 log.go:172] (0xc0003179a0) (3) Data frame sent\nI0511 19:19:16.951063 3070 log.go:172] (0xc000116fd0) Data frame received for 3\nI0511 19:19:16.951077 3070 log.go:172] (0xc0003179a0) (3) Data frame handling\nI0511 19:19:16.952657 3070 log.go:172] (0xc000116fd0) Data frame received for 1\nI0511 19:19:16.952693 3070 log.go:172] (0xc0006a2aa0) (1) Data frame handling\nI0511 19:19:16.952719 3070 log.go:172] (0xc0006a2aa0) (1) Data frame sent\nI0511 19:19:16.952742 3070 log.go:172] (0xc000116fd0) (0xc0006a2aa0) Stream removed, broadcasting: 1\nI0511 19:19:16.952758 3070 log.go:172] 
(0xc000116fd0) Go away received\nI0511 19:19:16.953089 3070 log.go:172] (0xc000116fd0) (0xc0006a2aa0) Stream removed, broadcasting: 1\nI0511 19:19:16.953303 3070 log.go:172] (0xc000116fd0) (0xc0003179a0) Stream removed, broadcasting: 3\nI0511 19:19:16.953320 3070 log.go:172] (0xc000116fd0) (0xc0008b6000) Stream removed, broadcasting: 5\n" May 11 19:19:16.957: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 19:19:16.957: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 19:19:16.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 19:19:17.564: INFO: stderr: "I0511 19:19:17.248168 3090 log.go:172] (0xc000ae2370) (0xc0008c45a0) Create stream\nI0511 19:19:17.248260 3090 log.go:172] (0xc000ae2370) (0xc0008c45a0) Stream added, broadcasting: 1\nI0511 19:19:17.251572 3090 log.go:172] (0xc000ae2370) Reply frame received for 1\nI0511 19:19:17.251615 3090 log.go:172] (0xc000ae2370) (0xc0008c4640) Create stream\nI0511 19:19:17.251626 3090 log.go:172] (0xc000ae2370) (0xc0008c4640) Stream added, broadcasting: 3\nI0511 19:19:17.252548 3090 log.go:172] (0xc000ae2370) Reply frame received for 3\nI0511 19:19:17.252580 3090 log.go:172] (0xc000ae2370) (0xc000988000) Create stream\nI0511 19:19:17.252592 3090 log.go:172] (0xc000ae2370) (0xc000988000) Stream added, broadcasting: 5\nI0511 19:19:17.253442 3090 log.go:172] (0xc000ae2370) Reply frame received for 5\nI0511 19:19:17.309663 3090 log.go:172] (0xc000ae2370) Data frame received for 5\nI0511 19:19:17.309688 3090 log.go:172] (0xc000988000) (5) Data frame handling\nI0511 19:19:17.309711 3090 log.go:172] (0xc000988000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 19:19:17.558640 3090 log.go:172] (0xc000ae2370) Data frame received for 5\nI0511 19:19:17.558667 3090 log.go:172] (0xc000988000) (5) Data frame handling\nI0511 19:19:17.558690 3090 log.go:172] (0xc000ae2370) Data frame received for 3\nI0511 19:19:17.558700 3090 log.go:172] (0xc0008c4640) (3) Data frame handling\nI0511 19:19:17.558706 3090 log.go:172] (0xc0008c4640) (3) Data frame sent\nI0511 19:19:17.558711 3090 log.go:172] (0xc000ae2370) Data frame received for 3\nI0511 19:19:17.558716 3090 log.go:172] (0xc0008c4640) (3) Data frame handling\nI0511 19:19:17.559979 3090 log.go:172] (0xc000ae2370) Data frame received for 1\nI0511 19:19:17.559992 3090 log.go:172] (0xc0008c45a0) (1) Data frame handling\nI0511 19:19:17.560005 3090 log.go:172] (0xc0008c45a0) (1) Data frame sent\nI0511 19:19:17.560166 3090 log.go:172] (0xc000ae2370) (0xc0008c45a0) Stream removed, broadcasting: 1\nI0511 19:19:17.560184 3090 log.go:172] (0xc000ae2370) Go away received\nI0511 19:19:17.560604 3090 log.go:172] (0xc000ae2370) (0xc0008c45a0) Stream removed, broadcasting: 1\nI0511 19:19:17.560617 3090 log.go:172] (0xc000ae2370) (0xc0008c4640) Stream removed, broadcasting: 3\nI0511 19:19:17.560622 3090 log.go:172] (0xc000ae2370) (0xc000988000) Stream removed, broadcasting: 5\n" May 11 19:19:17.564: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 19:19:17.564: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 19:19:17.564: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:19:17.567: INFO: Waiting for 
stateful set status.readyReplicas to become 0, currently 2 May 11 19:19:27.634: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:19:27.634: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 19:19:27.634: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 19:19:27.849: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:27.849: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:27.849: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:27.849: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:27.849: INFO: May 11 19:19:27.849: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:29.388: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:29.388: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:29.388: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:29.388: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:29.388: INFO: May 11 19:19:29.388: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:30.533: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:30.533: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:30.533: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:30.533: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:30.533: INFO: May 11 19:19:30.533: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:31.587: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:31.587: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:31.587: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:31.587: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:31.587: INFO: May 11 19:19:31.587: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:32.663: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:32.663: INFO: ss-0 iruya-worker2 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:32.663: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:32.663: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:32.663: INFO: May 11 19:19:32.663: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:33.666: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:33.666: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:33.666: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:33.666: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:33.666: INFO: May 11 19:19:33.666: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:34.671: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:34.671: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:34.671: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:34.671: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:34.671: INFO: May 11 19:19:34.671: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:35.674: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:35.674: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:35.674: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:35.674: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:35.674: INFO: May 11 19:19:35.674: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:19:36.678: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:19:36.678: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:18:16 +0000 UTC }] May 11 19:19:36.678: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:03 +0000 UTC }] May 11 19:19:36.678: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:19:04 +0000 UTC }] May 11 19:19:36.678: INFO: May 11 19:19:36.678: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7461 May 11 19:19:37.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:37.799: INFO: rc: 1 May 11 19:19:37.799: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00227a120 exit status 1 true [0xc0026b6800 0xc0026b6818 0xc0026b6830] [0xc0026b6800 0xc0026b6818 0xc0026b6830] [0xc0026b6810 0xc0026b6828] [0xba70e0 0xba70e0] 0xc0017bc5a0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 11 19:19:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:47.957: INFO: rc: 1 May 11 19:19:47.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030b8090 exit status 1 true [0xc000010710 0xc000010c58 0xc000010dd0] [0xc000010710 0xc000010c58 0xc000010dd0] [0xc000010b88 0xc000010d90] [0xba70e0 0xba70e0] 0xc0029c46c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:19:57.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:19:58.044: INFO: rc: 1 May 11 19:19:58.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c090 exit status 1 true [0xc0009681f8 0xc0009684e0 0xc000968640] [0xc0009681f8 0xc0009684e0 0xc000968640] [0xc0009684c0 0xc000968620] [0xba70e0 0xba70e0] 0xc001c84480 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:08.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:20:08.138: INFO: rc: 1 May 11 19:20:08.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030b8180 exit status 1 true [0xc000010ef8 0xc000011558 0xc000011ad0] [0xc000010ef8 0xc000011558 0xc000011ad0] [0xc0000111a0 0xc000011940] [0xba70e0 0xba70e0] 0xc0029c54a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:18.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:20:18.516: INFO: rc: 1 May 11 19:20:18.516: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c8090 exit status 1 true [0xc000e00000 0xc000e00018 0xc000e00030] [0xc000e00000 0xc000e00018 0xc000e00030] [0xc000e00010 0xc000e00028] [0xba70e0 0xba70e0] 0xc00270a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:28.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:20:29.114: INFO: rc: 1 May 11 19:20:29.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c8150 exit status 1 true [0xc000e00038 0xc000e00050 0xc000e00068] [0xc000e00038 0xc000e00050 0xc000e00068] [0xc000e00048 0xc000e00060] [0xba70e0 0xba70e0] 0xc00270a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:39.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:20:39.240: INFO: rc: 1 May 11 19:20:39.240: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c8210 exit status 1 true [0xc000e00070 0xc000e00088 0xc000e000a0] [0xc000e00070 0xc000e00088 0xc000e000a0] [0xc000e00080 0xc000e00098] [0xba70e0 0xba70e0] 0xc00270a900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:49.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' May 11 19:20:49.363: INFO: rc: 1 May 11 19:20:49.363: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c150 exit status 1 true [0xc000968690 0xc000968d68 0xc000968ee8] [0xc000968690 0xc000968d68 0xc000968ee8] [0xc000968d48 0xc000968dc0] [0xba70e0 0xba70e0] 0xc001c84ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:20:59.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:00.143: INFO: rc: 1 May 11 19:21:00.143: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030b8270 exit status 1 true [0xc000011c68 0xc000011d90 0xc00112e020] [0xc000011c68 0xc000011d90 0xc00112e020] [0xc000011d78 0xc000011f58] [0xba70e0 0xba70e0] 0xc0029c5c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:21:10.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:10.239: INFO: rc: 1 May 11 19:21:10.239: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c8330 exit status 1 true [0xc000e000a8 0xc000e000c0 0xc000e000d8] [0xc000e000a8 0xc000e000c0 0xc000e000d8] [0xc000e000b8 0xc000e000d0] [0xba70e0 0xba70e0] 0xc00270aea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:21:20.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:20.339: INFO: rc: 1 May 11 19:21:20.339: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c83f0 exit status 1 true [0xc000e000e0 0xc000e000f8 0xc000e00110] [0xc000e000e0 0xc000e000f8 0xc000e00110] [0xc000e000f0 0xc000e00108] [0xba70e0 0xba70e0] 0xc00270b560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:21:30.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:30.432: INFO: rc: 1 May 11 19:21:30.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c270 exit status 1 true [0xc000968f10 0xc000968fc0 0xc000969088] [0xc000968f10 0xc000968fc0 0xc000969088] [0xc000968fa8 0xc000969070] [0xba70e0 0xba70e0] 0xc001c85980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:21:40.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:40.707: INFO: rc: 1 May 11 19:21:40.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c360 exit status 1 true [0xc000969128 0xc000969258 0xc0009692e0] [0xc000969128 0xc000969258 0xc0009692e0] [0xc000969230 0xc0009692c0] [0xba70e0 0xba70e0] 0xc00253c120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:21:50.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:21:50.811: INFO: rc: 1 May 11 19:21:50.811: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092090 exit status 1 true [0xc000010940 0xc000010cd8 0xc000010ef8] [0xc000010940 0xc000010cd8 0xc000010ef8] [0xc000010c58 0xc000010dd0] [0xba70e0 0xba70e0] 0xc001c84480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:00.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:01.009: INFO: rc: 1 May 11 19:22:01.009: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c80c0 exit status 1 true [0xc00052e360 0xc000e00010 0xc000e00028] [0xc00052e360 0xc000e00010 0xc000e00028] [0xc000e00008 0xc000e00020] [0xba70e0 0xba70e0] 0xc00270a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:11.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:11.117: INFO: rc: 1 May 11 19:22:11.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030921b0 exit status 1 true [0xc000011170 0xc0000118b8 0xc000011c68] [0xc000011170 0xc0000118b8 0xc000011c68] [0xc000011558 
0xc000011ad0] [0xba70e0 0xba70e0] 0xc001c84ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:21.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:21.384: INFO: rc: 1 May 11 19:22:21.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030922a0 exit status 1 true [0xc000011d48 0xc000011ea8 0xc000968228] [0xc000011d48 0xc000011ea8 0xc000968228] [0xc000011d90 0xc0009681f8] [0xba70e0 0xba70e0] 0xc001c85980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:31.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:31.480: INFO: rc: 1 May 11 19:22:31.480: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092390 exit status 1 true [0xc0009684c0 0xc000968620 0xc000968c30] [0xc0009684c0 0xc000968620 0xc000968c30] [0xc000968580 0xc000968690] [0xba70e0 0xba70e0] 0xc00253c120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:41.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:41.578: INFO: rc: 1 May 11 19:22:41.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092450 exit status 1 true [0xc000968d48 0xc000968dc0 0xc000968f88] [0xc000968d48 0xc000968dc0 0xc000968f88] [0xc000968d80 0xc000968f10] [0xba70e0 0xba70e0] 0xc00253cc00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:22:51.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:22:51.676: INFO: rc: 1 May 11 19:22:51.676: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092540 exit status 1 true [0xc000968fa8 0xc000969070 0xc0009691d0] [0xc000968fa8 0xc000969070 0xc0009691d0] [0xc000969020 0xc000969128] [0xba70e0 0xba70e0] 0xc00253de00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:23:01.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:23:01.776: INFO: rc: 1 May 11 19:23:01.776: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c81b0 exit status 1 true [0xc000e00030 0xc000e00048 0xc000e00060] [0xc000e00030 0xc000e00048 0xc000e00060] [0xc000e00040 0xc000e00058] [0xba70e0 0xba70e0] 0xc00270a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:23:11.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:23:11.875: INFO: rc: 1 May 11 19:23:11.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092630 exit status 1 true [0xc000969230 0xc0009692c0 0xc000969320] [0xc000969230 0xc0009692c0 0xc000969320] [0xc0009692b8 0xc0009692f8] [0xba70e0 0xba70e0] 0xc0029c4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:23:21.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:23:21.975: INFO: rc: 1 May 11 19:23:21.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c120 exit status 1 true [0xc00112e020 0xc00112e160 0xc00112e230] [0xc00112e020 0xc00112e160 0xc00112e230] [0xc00112e130 0xc00112e198] [0xba70e0 0xba70e0] 0xc002008000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:23:31.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:23:33.447: INFO: rc: 1 May 11 19:23:33.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c210 exit status 1 true [0xc00112e398 0xc00112e7e8 0xc00112e970] [0xc00112e398 0xc00112e7e8 0xc00112e970] [0xc00112e5e8 0xc00112e920] [0xba70e0 0xba70e0] 0xc002008300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:23:43.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:23:43.558: INFO: rc: 1 May 11 19:23:43.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003092720 exit status 1 true [0xc000969498 0xc0009696d0 0xc000969760] [0xc000969498 0xc0009696d0 0xc000969760] [0xc0009695e8 0xc000969730] [0xba70e0 0xba70e0] 0xc0029c5320 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:23:53.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:23:53.720: INFO: rc: 1
May 11 19:23:53.720: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c80f0 exit status 1 true [0xc00052e360 0xc000010b88 0xc000010d90] [0xc00052e360 0xc000010b88 0xc000010d90] [0xc000010940 0xc000010cd8] [0xba70e0 0xba70e0] 0xc00253c660 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:24:03.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:24:03.813: INFO: rc: 1
May 11 19:24:03.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030920f0 exit status 1 true [0xc000e00000 0xc000e00018 0xc000e00030] [0xc000e00000 0xc000e00018 0xc000e00030] [0xc000e00010 0xc000e00028] [0xba70e0 0xba70e0] 0xc001c84480 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:24:13.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:24:13.919: INFO: rc: 1
May 11 19:24:13.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030c8210 exit status 1 true [0xc000010dd0 0xc0000111a0 0xc000011940] [0xc000010dd0 0xc0000111a0 0xc000011940] [0xc000011170 0xc0000118b8] [0xba70e0 0xba70e0] 0xc00253d740 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:24:23.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:24:24.005: INFO: rc: 1
May 11 19:24:24.005: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0030921e0 exit status 1 true [0xc000e00038 0xc000e00050 0xc000e00068] [0xc000e00038 0xc000e00050 0xc000e00068] [0xc000e00048 0xc000e00060] [0xba70e0 0xba70e0] 0xc001c84ea0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:24:34.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:24:34.106: INFO: rc: 1
May 11 19:24:34.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f9c0c0 exit status 1 true [0xc0009681f8 0xc0009684e0 0xc000968640] [0xc0009681f8 0xc0009684e0 0xc000968640] [0xc0009684c0 0xc000968620] [0xba70e0 0xba70e0] 0xc00270a000 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 11 19:24:44.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7461 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 19:24:44.204: INFO: rc: 1
May 11 19:24:44.204: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 11 19:24:44.204: INFO: Scaling statefulset ss to 0
May 11 19:24:44.211: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 11 19:24:44.213: INFO: Deleting all statefulset in ns statefulset-7461
May 11 19:24:44.214: INFO: Scaling statefulset ss to 0
May 11 19:24:44.221: INFO: Waiting for statefulset status.replicas updated to 0
May 11 19:24:44.223: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:24:44.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7461" for this suite.
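
Note on the retries above: the repeated 'Error from server (NotFound): pods "ss-0" not found' responses are the suite's RunHostCmd polling a pod that has already been removed during scale-down; with burst scaling the StatefulSet does not act on pods one at a time. A minimal sketch of a StatefulSet set up for burst scaling, assuming a headless Service named "test" already exists (names, image, and replica count are illustrative, not the test's exact spec):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # assumed headless Service
  podManagementPolicy: Parallel  # burst scaling: pods are created/deleted in parallel
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Mirrors "Scaling statefulset ss to 0" from the log:
kubectl scale statefulset ss --replicas=0
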
May 11 19:24:50.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:24:50.413: INFO: namespace statefulset-7461 deletion completed in 6.151797391s
• [SLOW TEST:394.187 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:24:50.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0511 19:25:02.132571 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 19:25:02.132: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:25:02.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6096" for this suite.
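
The behavior verified here hinges on metadata.ownerReferences: half the pods list both ReplicationControllers as owners, and the owner deleted with foreground propagation must not take those pods with it while a second owner is still alive. An illustrative way to observe this (the --cascade=foreground flag syntax is for kubectl 1.20 and later; clients contemporary with this log used a boolean --cascade):

# Delete one owner with foreground cascading; dependents that also reference
# the surviving owner must not be garbage-collected.
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground
# Print each remaining pod together with the names of its owners:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
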
May 11 19:25:10.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:25:10.435: INFO: namespace gc-6096 deletion completed in 8.299650895s • [SLOW TEST:20.022 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:25:10.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-qdh7 STEP: Creating a pod to test atomic-volume-subpath May 11 19:25:10.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qdh7" in namespace "subpath-6614" to be "success or failure" May 11 19:25:10.919: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Pending", Reason="", readiness=false. Elapsed: 81.561221ms May 11 19:25:12.924: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086233498s May 11 19:25:14.927: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 4.089880177s May 11 19:25:16.934: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 6.096606011s May 11 19:25:18.937: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 8.099891745s May 11 19:25:20.941: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 10.103784923s May 11 19:25:22.944: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 12.106916761s May 11 19:25:25.081: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 14.243632243s May 11 19:25:27.084: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 16.246344892s May 11 19:25:29.088: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 18.251137161s May 11 19:25:31.189: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. Elapsed: 20.351221939s May 11 19:25:33.192: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.354834237s May 11 19:25:35.286: INFO: Pod "pod-subpath-test-configmap-qdh7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.448450529s STEP: Saw pod success May 11 19:25:35.286: INFO: Pod "pod-subpath-test-configmap-qdh7" satisfied condition "success or failure" May 11 19:25:35.288: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-qdh7 container test-container-subpath-configmap-qdh7: STEP: delete the pod May 11 19:25:35.627: INFO: Waiting for pod pod-subpath-test-configmap-qdh7 to disappear May 11 19:25:36.021: INFO: Pod pod-subpath-test-configmap-qdh7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qdh7 May 11 19:25:36.021: INFO: Deleting pod "pod-subpath-test-configmap-qdh7" in namespace "subpath-6614" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:25:36.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6614" for this suite. May 11 19:25:46.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:25:46.939: INFO: namespace subpath-6614 deletion completed in 10.912835423s • [SLOW TEST:36.503 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:25:46.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-737/configmap-test-d1db9091-968d-471c-aa44-2a11c58b4c78 STEP: Creating a pod to test consume configMaps May 11 19:25:47.634: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab" in namespace "configmap-737" to be "success or failure" May 11 19:25:47.689: INFO: Pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab": Phase="Pending", Reason="", readiness=false. Elapsed: 55.280207ms May 11 19:25:49.692: INFO: Pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057868768s May 11 19:25:51.695: INFO: Pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060951341s May 11 19:25:53.699: INFO: Pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.065090766s STEP: Saw pod success May 11 19:25:53.699: INFO: Pod "pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab" satisfied condition "success or failure" May 11 19:25:53.702: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab container env-test: STEP: delete the pod May 11 19:25:53.903: INFO: Waiting for pod pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab to disappear May 11 19:25:53.934: INFO: Pod pod-configmaps-a3a19b51-bd81-4754-a329-8677f6e2bdab no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:25:53.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-737" for this suite. May 11 19:25:59.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:26:00.075: INFO: namespace configmap-737 deletion completed in 6.137216265s • [SLOW TEST:13.135 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:26:00.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 19:26:07.838: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c545f0b6-a4ca-461c-9821-c56e6022a08a" May 11 19:26:07.838: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c545f0b6-a4ca-461c-9821-c56e6022a08a" in namespace "pods-9222" to be "terminated due to deadline exceeded" May 11 19:26:07.897: INFO: Pod "pod-update-activedeadlineseconds-c545f0b6-a4ca-461c-9821-c56e6022a08a": Phase="Running", Reason="", readiness=true. Elapsed: 59.154482ms May 11 19:26:10.346: INFO: Pod "pod-update-activedeadlineseconds-c545f0b6-a4ca-461c-9821-c56e6022a08a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.507717685s May 11 19:26:10.346: INFO: Pod "pod-update-activedeadlineseconds-c545f0b6-a4ca-461c-9821-c56e6022a08a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:26:10.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9222" for this suite. 
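
activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod: it can be set, or lowered, but never raised. A sketch of the update the test performs, using a hypothetical pod name:

# Give the running pod 5 seconds to live; the kubelet then terminates it and
# the pod ends in Phase=Failed with Reason=DeadlineExceeded, as logged above.
kubectl patch pod mypod --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod mypod -o jsonpath='{.status.phase}/{.status.reason}'
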
May 11 19:26:16.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:26:17.717: INFO: namespace pods-9222 deletion completed in 7.011831644s • [SLOW TEST:17.642 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:26:17.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 19:26:17.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646" in namespace "downward-api-6729" to be "success or failure" May 11 19:26:17.939: INFO: Pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646": Phase="Pending", Reason="", readiness=false. Elapsed: 16.16946ms May 11 19:26:19.974: INFO: Pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050625297s May 11 19:26:22.015: INFO: Pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09218272s May 11 19:26:24.130: INFO: Pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206353967s STEP: Saw pod success May 11 19:26:24.130: INFO: Pod "downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646" satisfied condition "success or failure" May 11 19:26:24.132: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646 container client-container: STEP: delete the pod May 11 19:26:24.304: INFO: Waiting for pod downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646 to disappear May 11 19:26:24.371: INFO: Pod downwardapi-volume-ce5f4b44-ab0a-4df4-86cf-8efff4abc646 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:26:24.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6729" for this suite. 
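
The assertion in this test is that a downwardAPI volume item for limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A minimal sketch under that assumption (pod, container, and image names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory on purpose: the projected value should be
    # the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
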
May 11 19:26:30.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:26:30.658: INFO: namespace downward-api-6729 deletion completed in 6.283865973s • [SLOW TEST:12.940 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:26:30.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 19:26:30.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb" in namespace "projected-6915" to be "success or failure" May 11 19:26:30.899: INFO: Pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.219178ms May 11 19:26:32.902: INFO: Pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037656149s May 11 19:26:34.906: INFO: Pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041534073s May 11 19:26:36.986: INFO: Pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121330508s STEP: Saw pod success May 11 19:26:36.986: INFO: Pod "downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb" satisfied condition "success or failure" May 11 19:26:36.989: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb container client-container: STEP: delete the pod May 11 19:26:37.168: INFO: Waiting for pod downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb to disappear May 11 19:26:37.216: INFO: Pod downwardapi-volume-e62831c6-dd3a-44b4-9883-b08f5c6f58eb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:26:37.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6915" for this suite. 
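
The projected variant exercises the same fallback; only the volume declaration changes, with the downwardAPI items nested under projected.sources. The sketch above carries over if its volumes section is replaced by this (again illustrative, not the test's exact spec):

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
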
May 11 19:26:43.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:26:44.145: INFO: namespace projected-6915 deletion completed in 6.925715439s • [SLOW TEST:13.487 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:26:44.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 19:26:44.708: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 19:26:49.758: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 19:26:51.770: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 19:26:51.964: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3323,SelfLink:/apis/apps/v1/namespaces/deployment-3323/deployments/test-cleanup-deployment,UID:dbb4ecae-1c04-404b-9cc6-374ee4ef87d7,ResourceVersion:10311370,Generation:1,CreationTimestamp:2020-05-11 19:26:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 11 19:26:52.101: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3323,SelfLink:/apis/apps/v1/namespaces/deployment-3323/replicasets/test-cleanup-deployment-55bbcbc84c,UID:6c98dc9b-699f-4906-ad62-3ff70678f5b9,ResourceVersion:10311373,Generation:1,CreationTimestamp:2020-05-11 19:26:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment dbb4ecae-1c04-404b-9cc6-374ee4ef87d7 0xc002c7cff7 0xc002c7cff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 19:26:52.101: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 19:26:52.101: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3323,SelfLink:/apis/apps/v1/namespaces/deployment-3323/replicasets/test-cleanup-controller,UID:7a409dca-123b-411d-adf4-b2987d33c47c,ResourceVersion:10311371,Generation:1,CreationTimestamp:2020-05-11 19:26:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment dbb4ecae-1c04-404b-9cc6-374ee4ef87d7 0xc002c7cf27 0xc002c7cf28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 19:26:52.151: INFO: Pod "test-cleanup-controller-f2jbn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-f2jbn,GenerateName:test-cleanup-controller-,Namespace:deployment-3323,SelfLink:/api/v1/namespaces/deployment-3323/pods/test-cleanup-controller-f2jbn,UID:3d87a69f-4a42-4869-9ab4-203e7d68f731,ResourceVersion:10311365,Generation:0,CreationTimestamp:2020-05-11 19:26:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7a409dca-123b-411d-adf4-b2987d33c47c 0xc002c7d8e7 0xc002c7d8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r2tkq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r2tkq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r2tkq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c7d960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c7d980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:26:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:26:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:26:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:26:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.229,StartTime:2020-05-11 19:26:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 19:26:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dd15759c23fad566ebb2370d88e824b7dc14dc519546fae75e1c82706236de06}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 19:26:52.151: INFO: Pod "test-cleanup-deployment-55bbcbc84c-f4khq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-f4khq,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3323,SelfLink:/api/v1/namespaces/deployment-3323/pods/test-cleanup-deployment-55bbcbc84c-f4khq,UID:f3b03db0-2d27-4a60-b4e6-4ef9ba59fbde,ResourceVersion:10311377,Generation:0,CreationTimestamp:2020-05-11 19:26:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 6c98dc9b-699f-4906-ad62-3ff70678f5b9 0xc002c7da87 0xc002c7da88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r2tkq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r2tkq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-r2tkq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c7db00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c7db20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:26:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:26:52.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3323" for this suite. 
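
The Deployment dump above shows RevisionHistoryLimit:*0, which is what entitles the superseded ReplicaSet (test-cleanup-controller) to be deleted as soon as the new one takes over. A reduced sketch of such a Deployment, reusing the image from the log but with otherwise illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0   # keep no old ReplicaSets after a rollout
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After a rollout, only the current ReplicaSet should remain:
kubectl get rs -l name=cleanup-pod
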
May 11 19:27:02.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:27:02.518: INFO: namespace deployment-3323 deletion completed in 10.337410856s • [SLOW TEST:18.373 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:27:02.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 19:27:02.769: INFO: Waiting up to 5m0s for pod "downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf" in namespace "downward-api-1316" to be "success or failure" May 11 19:27:02.791: INFO: Pod "downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.816948ms May 11 19:27:04.810: INFO: Pod "downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040289884s May 11 19:27:06.813: INFO: Pod "downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043508598s STEP: Saw pod success May 11 19:27:06.813: INFO: Pod "downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf" satisfied condition "success or failure" May 11 19:27:06.815: INFO: Trying to get logs from node iruya-worker pod downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf container dapi-container: STEP: delete the pod May 11 19:27:06.848: INFO: Waiting for pod downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf to disappear May 11 19:27:06.932: INFO: Pod downward-api-52f4273d-1e98-4156-b97b-b69331c0bbdf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:27:06.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1316" for this suite. 
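
The pod fields involved come from the downward API's fieldRef env source. A self-contained sketch (pod and variable names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs dapi-env-demo
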
May 11 19:27:12.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:27:13.134: INFO: namespace downward-api-1316 deletion completed in 6.197534059s • [SLOW TEST:10.615 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:27:13.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-2411e558-a1a5-4be2-acb4-e7358a01eae6 STEP: Creating a pod to test consume configMaps May 11 19:27:14.338: INFO: Waiting up to 5m0s for pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975" in namespace "configmap-5455" to be "success or failure" May 11 19:27:14.777: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975": Phase="Pending", Reason="", readiness=false. Elapsed: 439.209776ms May 11 19:27:16.782: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443432747s May 11 19:27:18.785: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446404141s May 11 19:27:20.788: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450288007s May 11 19:27:22.856: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.518225294s STEP: Saw pod success May 11 19:27:22.856: INFO: Pod "pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975" satisfied condition "success or failure" May 11 19:27:22.860: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975 container configmap-volume-test: STEP: delete the pod May 11 19:27:22.938: INFO: Waiting for pod pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975 to disappear May 11 19:27:23.118: INFO: Pod pod-configmaps-57dda937-a42e-4f54-9314-70467b7a7975 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:27:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5455" for this suite. 
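
"With mappings as non-root" combines two pieces: an items list that remaps a ConfigMap key onto a custom file path, and a pod securityContext so the volume is read by an unprivileged UID. A sketch with an assumed ConfigMap name and key:

kubectl create configmap my-config --from-literal=data-1='value-1'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-map-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # run as non-root
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cfg/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: my-config
      items:
      - key: data-1
        path: path/to/data-2   # key remapped to a different file path
EOF
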
May 11 19:27:31.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:27:31.445: INFO: namespace configmap-5455 deletion completed in 8.322861393s • [SLOW TEST:18.311 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:27:31.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-5ac6a459-8fd2-4bb0-aff7-31f98dd7b6ab STEP: Creating configMap with name cm-test-opt-upd-b0fa3e77-a99e-41f7-9a47-20a55b3dfe5f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5ac6a459-8fd2-4bb0-aff7-31f98dd7b6ab STEP: Updating configmap cm-test-opt-upd-b0fa3e77-a99e-41f7-9a47-20a55b3dfe5f STEP: Creating configMap with name cm-test-opt-create-a93e526b-d192-4278-8519-eaf98051179c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:27:44.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2349" for this suite. 
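
The "optional updates" part means the projected ConfigMap sources are marked optional: true, so the pod starts even while one of them does not exist yet, and the kubelet later folds creations, updates, and deletions into the mounted directory without restarting the container. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: viewer
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create   # may not exist yet
          optional: true
      - configMap:
          name: cm-test-opt-upd
          optional: true
EOF
# Creating, updating, or deleting the ConfigMaps later becomes visible
# under /etc/projected in the still-running container.
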
May 11 19:28:06.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:28:06.754: INFO: namespace projected-2349 deletion completed in 22.195346453s • [SLOW TEST:35.309 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:28:06.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-fb113435-0544-419a-8f4f-7f741dee1ec5 STEP: Creating a pod to test consume secrets May 11 19:28:06.937: INFO: Waiting up to 5m0s for pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81" in namespace "secrets-4545" to be "success or failure" May 11 19:28:06.943: INFO: Pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81": Phase="Pending", Reason="", readiness=false. Elapsed: 5.702445ms May 11 19:28:09.154: INFO: Pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216546075s May 11 19:28:11.158: INFO: Pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220790336s May 11 19:28:13.305: INFO: Pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.36791079s STEP: Saw pod success May 11 19:28:13.305: INFO: Pod "pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81" satisfied condition "success or failure" May 11 19:28:13.307: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81 container secret-volume-test: STEP: delete the pod May 11 19:28:13.554: INFO: Waiting for pod pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81 to disappear May 11 19:28:13.816: INFO: Pod pod-secrets-8f720bf9-2ad4-4b53-bee9-d6b7dfd98e81 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:28:13.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4545" for this suite. 
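
Secret volumes support the same key-to-path mapping as ConfigMap volumes. A sketch with an assumed Secret name and key:

kubectl create secret generic my-secret --from-literal=data-1='value-1'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new/path/data-2"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret
      items:
      - key: data-1
        path: new/path/data-2
EOF
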
May 11 19:28:20.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:28:20.089: INFO: namespace secrets-4545 deletion completed in 6.269188234s • [SLOW TEST:13.335 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:28:20.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 11 19:28:26.877: INFO: Successfully updated pod "labelsupdate24dd4de2-7fea-4b9a-9002-b26d92ea0ee2" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:28:28.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7250" for this suite. 
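
A downwardAPI volume file backed by metadata.labels is rewritten by the kubelet when the pod's labels change, which is what "Successfully updated pod" above is checking. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo stage=after --overwrite
# /etc/podinfo/labels inside the running container is updated in place
# shortly afterwards, without a container restart.
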
May 11 19:28:50.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:28:51.007: INFO: namespace downward-api-7250 deletion completed in 22.080757109s • [SLOW TEST:30.917 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:28:51.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 19:28:51.346: INFO: Waiting up to 5m0s for pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd" in namespace "downward-api-3788" to be "success or failure" May 11 19:28:51.363: INFO: Pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.622963ms May 11 19:28:53.367: INFO: Pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02062772s May 11 19:28:55.370: INFO: Pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023685124s May 11 19:28:57.575: INFO: Pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.229219161s STEP: Saw pod success May 11 19:28:57.575: INFO: Pod "downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd" satisfied condition "success or failure" May 11 19:28:57.578: INFO: Trying to get logs from node iruya-worker pod downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd container dapi-container: STEP: delete the pod May 11 19:28:57.640: INFO: Waiting for pod downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd to disappear May 11 19:28:57.910: INFO: Pod downward-api-a452e94d-5aec-4997-971a-1b1b83cbe0bd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:28:57.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3788" for this suite. 
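
The env-var flavor of the downward API uses resourceFieldRef rather than fieldRef; note that with the default divisor of 1, CPU quantities are rounded up to whole cores. A sketch with illustrative names and resource values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
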
May 11 19:29:04.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:29:04.212: INFO: namespace downward-api-3788 deletion completed in 6.297903347s • [SLOW TEST:13.205 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 19:29:04.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 11 19:29:04.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 11 19:29:04.418: INFO: stderr: "" May 11 19:29:04.418: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 19:29:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7505" for this suite. 
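
The equivalent check from a shell uses the same kubectl invocation as the test; the grep is an addition for filtering and is not part of the test itself:

# -x matches the whole line, so group/version strings such as
# "admissionregistration.k8s.io/v1beta1" do not count as "v1".
kubectl api-versions | grep -x v1
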
May 11 19:29:12.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:29:12.788: INFO: namespace kubectl-7505 deletion completed in 8.364622741s
• [SLOW TEST:8.576 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:29:12.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b19fcd90-53d9-4f76-9fa5-382b5d1d5a61
STEP: Creating a pod to test consume secrets
May 11 19:29:13.575: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac" in namespace "projected-6157" to be "success or failure"
May 11 19:29:13.623: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac": Phase="Pending", Reason="", readiness=false. Elapsed: 47.951125ms
May 11 19:29:15.625: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05050944s
May 11 19:29:17.629: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054213919s
May 11 19:29:19.641: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065968729s
May 11 19:29:21.645: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070296344s
STEP: Saw pod success
May 11 19:29:21.645: INFO: Pod "pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac" satisfied condition "success or failure"
May 11 19:29:21.648: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac container projected-secret-volume-test:
STEP: delete the pod
May 11 19:29:21.683: INFO: Waiting for pod pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac to disappear
May 11 19:29:21.699: INFO: Pod pod-projected-secrets-af8c5884-ef5f-49c7-b199-86ded8f448ac no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:29:21.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6157" for this suite.
May 11 19:29:27.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:29:27.783: INFO: namespace projected-6157 deletion completed in 6.080334311s
• [SLOW TEST:14.994 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
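As a reference outside the harness, a projected secret volume with defaultMode plus a non-root user and fsGroup can be approximated as below; the secret name, UID/GID values, mode, and mount path are illustrative assumptions:

$ kubectl create secret generic projected-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # non-root
    fsGroup: 2000       # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /projected && cat /projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440   # group-readable, so UID 1000 can read via fsGroup 2000
      sources:
      - secret:
          name: projected-demo
EOF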
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:29:27.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 19:29:27.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f" in namespace "downward-api-8101" to be "success or failure"
May 11 19:29:28.072: INFO: Pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 81.814054ms
May 11 19:29:30.075: INFO: Pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084702683s
May 11 19:29:32.210: INFO: Pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22002246s
May 11 19:29:34.215: INFO: Pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.225128476s
STEP: Saw pod success
May 11 19:29:34.216: INFO: Pod "downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f" satisfied condition "success or failure"
May 11 19:29:34.219: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f container client-container:
STEP: delete the pod
May 11 19:29:34.397: INFO: Waiting for pod downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f to disappear
May 11 19:29:34.723: INFO: Pod downwardapi-volume-2f393f8e-85a2-4a34-8313-a9cb398e7e5f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:29:34.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8101" for this suite.
May 11 19:29:40.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:29:41.077: INFO: namespace downward-api-8101 deletion completed in 6.350147348s
• [SLOW TEST:13.294 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
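The volume flavour of the downward API, exercised above, exposes resource fields as files; unlike the env-var form, resourceFieldRef in a volume must name the container. A hedged sketch, with names, sizes, and the mount path as assumptions:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]   # prints the request in bytes at the default divisor
    resources:
      requests: {memory: 32Mi}
      limits: {memory: 64Mi}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container   # required in the volume form
          resource: requests.memory
EOF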
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:29:41.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
May 11 19:29:41.180: INFO: Waiting up to 5m0s for pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52" in namespace "containers-3209" to be "success or failure"
May 11 19:29:41.189: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Pending", Reason="", readiness=false. Elapsed: 9.565132ms
May 11 19:29:43.234: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053876888s
May 11 19:29:45.250: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070098851s
May 11 19:29:48.712: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Pending", Reason="", readiness=false. Elapsed: 7.531859036s
May 11 19:29:50.714: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Running", Reason="", readiness=true. Elapsed: 9.534565979s
May 11 19:29:52.718: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.537997148s
STEP: Saw pod success
May 11 19:29:52.718: INFO: Pod "client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52" satisfied condition "success or failure"
May 11 19:29:52.720: INFO: Trying to get logs from node iruya-worker pod client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52 container test-container:
STEP: delete the pod
May 11 19:29:53.527: INFO: Waiting for pod client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52 to disappear
May 11 19:29:54.031: INFO: Pod client-containers-efc748e9-03d1-4ad3-994d-9401d3cefe52 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:29:54.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3209" for this suite.
May 11 19:30:02.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:30:02.425: INFO: namespace containers-3209 deletion completed in 8.390042153s
• [SLOW TEST:21.348 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
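For reference, the "override all" case above corresponds to setting both command (the image's ENTRYPOINT) and args (the image's CMD) in the container spec; a minimal sketch with assumed names and values:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]                    # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"]    # replaces the image CMD
EOF
$ kubectl logs override-demo   # prints: overridden arguments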
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:30:02.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f71533e4-0ecb-4838-a522-928fae428f1e
STEP: Creating a pod to test consume secrets
May 11 19:30:04.692: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b" in namespace "projected-2617" to be "success or failure"
May 11 19:30:04.925: INFO: Pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b": Phase="Pending", Reason="", readiness=false. Elapsed: 232.843873ms
May 11 19:30:06.959: INFO: Pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266474037s
May 11 19:30:09.024: INFO: Pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b": Phase="Running", Reason="", readiness=true. Elapsed: 4.33139677s
May 11 19:30:11.267: INFO: Pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574176574s
STEP: Saw pod success
May 11 19:30:11.267: INFO: Pod "pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b" satisfied condition "success or failure"
May 11 19:30:11.270: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b container projected-secret-volume-test:
STEP: delete the pod
May 11 19:30:11.535: INFO: Waiting for pod pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b to disappear
May 11 19:30:11.803: INFO: Pod pod-projected-secrets-1d4bcbea-31f2-4a4f-b914-1de37d06d00b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:30:11.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2617" for this suite.
May 11 19:30:20.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:30:20.221: INFO: namespace projected-2617 deletion completed in 8.415242071s
• [SLOW TEST:17.796 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:30:20.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 11 19:30:20.357: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:30:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7830" for this suite.
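What the InitContainer spec above asserts can be reproduced with a deliberately failing init container on a restartPolicy: Never pod: the app container must never start and the pod must end up Failed. A sketch under assumed names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]      # init fails; with Never the pod is not retried
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo should never run"]
EOF
$ kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expect: Failed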
May 11 19:30:36.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:30:36.793: INFO: namespace init-container-7830 deletion completed in 8.459646446s
• [SLOW TEST:16.571 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:30:36.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e35b2147-4a09-41c2-ba44-658b41f3229f
STEP: Creating a pod to test consume secrets
May 11 19:30:36.898: INFO: Waiting up to 5m0s for pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277" in namespace "secrets-7699" to be "success or failure"
May 11 19:30:36.930: INFO: Pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277": Phase="Pending", Reason="", readiness=false. Elapsed: 31.878507ms
May 11 19:30:39.000: INFO: Pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101560377s
May 11 19:30:41.074: INFO: Pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175351906s
May 11 19:30:43.078: INFO: Pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179258693s
STEP: Saw pod success
May 11 19:30:43.078: INFO: Pod "pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277" satisfied condition "success or failure"
May 11 19:30:43.080: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277 container secret-volume-test:
STEP: delete the pod
May 11 19:30:43.360: INFO: Waiting for pod pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277 to disappear
May 11 19:30:43.499: INFO: Pod pod-secrets-dff449e1-f2a6-49e1-ac5e-77f744146277 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:30:43.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7699" for this suite.
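The "mappings and Item Mode" wording above refers to the items list of a secret volume, which remaps a key to a chosen path with its own file mode; sketched below with illustrative names, path, and mode:

$ kubectl create secret generic secret-map-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1   # the mapping
        mode: 0400              # the per-item mode
EOF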
May 11 19:30:51.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:30:51.623: INFO: namespace secrets-7699 deletion completed in 8.12020281s
• [SLOW TEST:14.830 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:30:51.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
May 11 19:30:51.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1702 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 11 19:31:01.309: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0511 19:31:01.232744 3719 log.go:172] (0xc000c0e4d0) (0xc0000d63c0) Create stream\nI0511 19:31:01.232776 3719 log.go:172] (0xc000c0e4d0) (0xc0000d63c0) Stream added, broadcasting: 1\nI0511 19:31:01.234369 3719 log.go:172] (0xc000c0e4d0) Reply frame received for 1\nI0511 19:31:01.234390 3719 log.go:172] (0xc000c0e4d0) (0xc0000d6460) Create stream\nI0511 19:31:01.234396 3719 log.go:172] (0xc000c0e4d0) (0xc0000d6460) Stream added, broadcasting: 3\nI0511 19:31:01.234972 3719 log.go:172] (0xc000c0e4d0) Reply frame received for 3\nI0511 19:31:01.235004 3719 log.go:172] (0xc000c0e4d0) (0xc000718820) Create stream\nI0511 19:31:01.235019 3719 log.go:172] (0xc000c0e4d0) (0xc000718820) Stream added, broadcasting: 5\nI0511 19:31:01.235623 3719 log.go:172] (0xc000c0e4d0) Reply frame received for 5\nI0511 19:31:01.235656 3719 log.go:172] (0xc000c0e4d0) (0xc000876000) Create stream\nI0511 19:31:01.235671 3719 log.go:172] (0xc000c0e4d0) (0xc000876000) Stream added, broadcasting: 7\nI0511 19:31:01.236223 3719 log.go:172] (0xc000c0e4d0) Reply frame received for 7\nI0511 19:31:01.236740 3719 log.go:172] (0xc0000d6460) (3) Writing data frame\nI0511 19:31:01.236845 3719 log.go:172] (0xc0000d6460) (3) Writing data frame\nI0511 19:31:01.241862 3719 log.go:172] (0xc000c0e4d0) Data frame received for 5\nI0511 19:31:01.241883 3719 log.go:172] (0xc000718820) (5) Data frame handling\nI0511 19:31:01.241908 3719 log.go:172] (0xc000718820) (5) Data frame sent\nI0511 19:31:01.242253 3719 log.go:172] (0xc000c0e4d0) Data frame received for 5\nI0511 19:31:01.242268 3719 log.go:172] (0xc000718820) (5) Data frame handling\nI0511 19:31:01.242280 3719 log.go:172] (0xc000718820) (5) Data frame sent\nI0511 19:31:01.278197 3719 log.go:172] (0xc000c0e4d0) Data frame received for 1\nI0511 19:31:01.278232 3719 log.go:172] (0xc0000d63c0) (1) Data frame handling\nI0511 19:31:01.278247 3719 log.go:172] (0xc0000d63c0) (1) Data frame sent\nI0511 19:31:01.278265 3719 log.go:172] (0xc000c0e4d0) Data frame received for 5\nI0511 19:31:01.278276 3719 log.go:172] (0xc000718820) (5) Data frame handling\nI0511 19:31:01.278294 3719 log.go:172] (0xc000c0e4d0) Data frame received for 7\nI0511 19:31:01.278307 3719 log.go:172] (0xc000876000) (7) Data frame handling\nI0511 19:31:01.278733 3719 log.go:172] (0xc000c0e4d0) (0xc0000d63c0) Stream removed, broadcasting: 1\nI0511 19:31:01.278764 3719 log.go:172] (0xc000c0e4d0) (0xc0000d6460) Stream removed, broadcasting: 3\nI0511 19:31:01.278782 3719 log.go:172] (0xc000c0e4d0) Go away received\nI0511 19:31:01.278830 3719 log.go:172] (0xc000c0e4d0) (0xc0000d63c0) Stream removed, broadcasting: 1\nI0511 19:31:01.278849 3719 log.go:172] (0xc000c0e4d0) (0xc0000d6460) Stream removed, broadcasting: 3\nI0511 19:31:01.279016 3719 log.go:172] (0xc000c0e4d0) (0xc000718820) Stream removed, broadcasting: 5\nI0511 19:31:01.279035 3719 log.go:172] (0xc000c0e4d0) (0xc000876000) Stream removed, broadcasting: 7\n"
May 11 19:31:01.309: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:31:03.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1702" for this suite.
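The stderr above notes that --generator=job/v1 is deprecated. On current kubectl the same create-run-delete cycle is usually expressed with kubectl create job; a sketch (job name, image, and command are illustrative, and the interactive --stdin/--attach behaviour of the original invocation is not reproduced here):

$ kubectl create job e2e-demo-job --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
$ kubectl wait --for=condition=complete job/e2e-demo-job
$ kubectl delete job e2e-demo-job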
May 11 19:31:13.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:31:13.614: INFO: namespace kubectl-1702 deletion completed in 10.267159778s
• [SLOW TEST:21.990 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:31:13.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 19:31:14.082: INFO: Waiting up to 5m0s for pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842" in namespace "emptydir-3179" to be "success or failure"
May 11 19:31:14.150: INFO: Pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842": Phase="Pending", Reason="", readiness=false. Elapsed: 67.245608ms
May 11 19:31:16.152: INFO: Pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069955484s
May 11 19:31:18.541: INFO: Pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458716396s
May 11 19:31:20.544: INFO: Pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.461777752s
STEP: Saw pod success
May 11 19:31:20.544: INFO: Pod "pod-3da44213-7e46-4a18-aeea-4e8abd8a7842" satisfied condition "success or failure"
May 11 19:31:20.546: INFO: Trying to get logs from node iruya-worker2 pod pod-3da44213-7e46-4a18-aeea-4e8abd8a7842 container test-container:
STEP: delete the pod
May 11 19:31:20.756: INFO: Waiting for pod pod-3da44213-7e46-4a18-aeea-4e8abd8a7842 to disappear
May 11 19:31:20.826: INFO: Pod pod-3da44213-7e46-4a18-aeea-4e8abd8a7842 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:31:20.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3179" for this suite.
May 11 19:31:27.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:31:27.566: INFO: namespace emptydir-3179 deletion completed in 6.735597739s
• [SLOW TEST:13.952 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:31:27.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0511 19:32:08.123462 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 19:32:08.123: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:32:08.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4881" for this suite.
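Orphaning, as exercised above, means deleting the owner while leaving its dependents behind. With the kubectl generation matching this log it is spelled --cascade=false (newer releases use --cascade=orphan). A sketch with assumed names and image:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo
spec:
  replicas: 2
  selector:
    app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
$ kubectl delete rc orphan-demo --cascade=false   # delete the RC, orphan its pods
$ kubectl get pods -l app=orphan-demo             # the pods survive, now ownerless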
May 11 19:32:24.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:32:24.418: INFO: namespace gc-4881 deletion completed in 16.293055802s
• [SLOW TEST:56.852 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:32:24.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-283545aa-1a37-478f-a416-d27cce5ab4bb
STEP: Creating a pod to test consume secrets
May 11 19:32:27.119: INFO: Waiting up to 5m0s for pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2" in namespace "secrets-4921" to be "success or failure"
May 11 19:32:27.146: INFO: Pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.02153ms
May 11 19:32:29.149: INFO: Pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030307973s
May 11 19:32:31.153: INFO: Pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033884176s
May 11 19:32:33.155: INFO: Pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036154725s
STEP: Saw pod success
May 11 19:32:33.155: INFO: Pod "pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2" satisfied condition "success or failure"
May 11 19:32:33.157: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2 container secret-volume-test:
STEP: delete the pod
May 11 19:32:33.279: INFO: Waiting for pod pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2 to disappear
May 11 19:32:33.349: INFO: Pod pod-secrets-9cd94cc9-9ac0-494f-921c-ed7c24dd42c2 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:32:33.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4921" for this suite.
May 11 19:32:43.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:32:43.604: INFO: namespace secrets-4921 deletion completed in 10.252037553s
STEP: Destroying namespace "secret-namespace-7611" for this suite.
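The point of the spec above is that secret mounts are namespace-scoped: a pod resolves a secret name only within its own namespace, no matter what exists elsewhere under the same name. A quick hand check (namespace and secret names are illustrative):

$ kubectl create namespace demo-a
$ kubectl create namespace demo-b
$ kubectl create secret generic shared-name --from-literal=owner=a -n demo-a
$ kubectl create secret generic shared-name --from-literal=owner=b -n demo-b
# A pod in demo-a mounting "shared-name" sees owner=a; the identically named
# secret in demo-b never enters the picture.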
May 11 19:32:51.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:32:51.961: INFO: namespace secret-namespace-7611 deletion completed in 8.357516765s
• [SLOW TEST:27.543 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:32:51.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 11 19:32:52.262: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
May 11 19:32:53.065: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 11 19:32:55.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:32:57.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:32:59.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:01.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:03.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:05.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:07.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:09.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822373, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 19:33:12.704: INFO: Waited 1.258934788s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:33:16.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5149" for this suite.
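Behind the scenes the aggregator is driven by an APIService object that routes a group/version to an in-cluster Service. The sketch below shows the shape of such a registration; the group, version, service coordinates, and priorities are illustrative assumptions, not the ones used by this suite:

$ kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true     # a real deployment should pin caBundle instead
  service:
    name: sample-api
    namespace: default
EOF
$ kubectl get apiservices v1alpha1.wardle.example.com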
May 11 19:33:25.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:33:25.234: INFO: namespace aggregator-5149 deletion completed in 8.478106594s
• [SLOW TEST:33.272 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 19:33:25.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 19:33:27.714: INFO: Waiting up to 5m0s for pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7" in namespace "emptydir-8359" to be "success or failure"
May 11 19:33:27.759: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.953107ms
May 11 19:33:29.795: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080651814s
May 11 19:33:32.154: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439443635s
May 11 19:33:34.159: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444305681s
May 11 19:33:36.162: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.448175449s
May 11 19:33:38.437: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.722792972s
STEP: Saw pod success
May 11 19:33:38.437: INFO: Pod "pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7" satisfied condition "success or failure"
May 11 19:33:38.439: INFO: Trying to get logs from node iruya-worker pod pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7 container test-container:
STEP: delete the pod
May 11 19:33:38.951: INFO: Waiting for pod pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7 to disappear
May 11 19:33:39.160: INFO: Pod pod-51a758e2-47d1-4372-9c52-b1e560b6d4f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 19:33:39.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8359" for this suite.
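This pairs with the (root,0777,default) EmptyDir case earlier: the difference is the medium of the emptyDir and who writes into it. A combined sketch (names, mount path, and the non-root UID are assumptions):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # the non-root variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /mnt/tmpfs && touch /mnt/tmpfs/f && ls -ln /mnt/tmpfs"]
    volumeMounts:
    - name: tmpfs-vol
      mountPath: /mnt/tmpfs
  volumes:
  - name: tmpfs-vol
    emptyDir:
      medium: Memory              # tmpfs; omit medium for the node's default storage
EOF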
May 11 19:33:47.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:33:47.847: INFO: namespace emptydir-8359 deletion completed in 8.68284252s
• [SLOW TEST:22.613 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
May 11 19:33:47.847: INFO: Running AfterSuite actions on all nodes
May 11 19:33:47.847: INFO: Running AfterSuite actions on node 1
May 11 19:33:47.847: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 8085.640 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS